July 04, 2015

Matthew Gertner - AllPeers: How to get a cheap holiday deal [Technorati links]

July 04, 2015 09:54 AM

Booking a holiday is always exciting, but it can also be a time when people worry about how much money they will spend. After working hard all year to pay for a week or two away from the hustle and bustle of city life, everyone wants to get the most for their money.

sea-beach-holiday-vacation

There’s nothing better than picking up a great deal that leaves more money in the wallet to spend once you reach your destination. But how can people find these great deals? Are there some ‘tricks of the trade’ that can be used to save money when booking? The simple answer is yes there are!

Here are three fantastic little tips that will help you save money when it comes to booking your next holiday or short break away from home. It’s easy to find cheap holidays if you know how!

Pick a destination where you get more for your money

One of the easiest ways to save money is to head to a place where you get a great exchange rate for your home currency. There are plenty of countries that offer fantastic holidays for all types of travellers. South East Asia is always a good option because your money will go a lot further on accommodation and food. Choosing somewhere close to home can also save you money, even if the exchange rate is less favourable: if you live in the UK, for example, a short trip across the sea to France keeps costs down simply because it is so cheap to get there.

Book early or very late

According to experts in the travel industry, the best way to get a great deal is to book around 11 months before you plan to go on holiday, while there are still plenty of cheap rooms and promotional rates on flights available. If you’re flexible and do not have your heart set on a particular destination, you can instead leave booking to the last minute. If you can hold out until around 8 weeks before you want to go away, you will often find some amazing last-minute deals to suit any budget.

Holiday at home

This is something that a lot of people don’t even consider, but how many fantastic places are there to visit in your home country? The answer is lots! So cut out the cost of flights, transport and currency exchange by finding somewhere in your own country. It makes things easier and much less stressful because communication, culture and haggling are never a problem. You can also rest easy, safe in the knowledge that you will most likely get the best deal available.

The post How to get a cheap holiday deal appeared first on All Peers.

July 02, 2015

Courion: 7 Ways to Reduce your Cyber Attack Surface [Technorati links]

July 02, 2015 03:06 PM

Access Risk Management Blog | Courion

7 Ways to Reduce Your Cyber Attack Surface (slide deck from Courion Corporation)

blog.courion.com

June 30, 2015

Courion: Tech Tuesday - From Encryption to BYOD Security [Technorati links]

June 30, 2015 01:32 PM

Access Risk Management Blog | Courion

encryption

Pita Bread Helps Researchers Steal Encryption Keys

In possibly the most delicious hack ever, a team of Israeli security researchers at Tel Aviv University have developed a way of stealing encryption keys using a cheap radio sniffer and a piece of pita bread. Truly a sight to see.

Lee Munson, NakedSecurity.com

 

Polish airline, hit by cyber-attack, says all carriers are at risk

Flight delays just got a little more advanced. A Polish airline was hit by a cyber-attack that grounded flights and left around 1,400 passengers stranded. There was never any danger to passengers because the attack happened while no planes were in the air. However, the company says that the hack could happen to anyone, at any time, making this a worldwide issue.

Wiktor Szary and Eric Auchard, Reuters.com

 

Details and insight for VARs: Medical Devices and Security Risk

If you liked last week's blog about the unique challenges facing healthcare today, then you'll love this look into how medical devices are becoming "key pivot points" in the war against hackers and cyberattacks.

Megan Williams, Business Solutions- bsminfo.com

 

The great debate: To BYOD or not to BYOD

Do you BYOD? As if security wasn't already difficult enough to control within your network and its devices, now security teams have to worry about the exponential threat of “bringing your own device”. This article gives 8 best practices for BYOD security and an insightful look at this new challenge.

Keith Poyster, ITPortal.com 

blog.courion.com

Matthew Gertner - AllPeers: How to Have Fun while Exploring Paris Alone [Technorati links]

June 30, 2015 10:05 AM
Paris, more commonly known as the “City of Love,” is a destination that a lot of couples dream of exploring. The city has a dazzling beauty and charm, qualities that are hard to find in other modern cities. If you have a passion for culture, history, fine dining, and shopping, Paris will surely captivate your fancy.
Paris_Arc_de_Triomphe

However, just because it is known as the “City of Love” doesn’t mean that you can’t enjoy its beauty on your own. Sometimes, having a bit of “me time” can work wonders for your mind and body. This article will provide you with useful tips that’ll make your holiday in Paris more exciting, even if you are travelling on your own.

1. Pack light

In order to have a fun adventure in Paris, make sure that you pack light. Ideally, you should bring sweatshirts and jeans so you stay comfortable. And of course, don’t forget essentials like your passport, toiletries, and spare clothes. Packing less makes it easier for you to move around the city.

2. Try local cuisine

Delicious and affordable meals can always be found in the city’s markets and cafes. Dining in an outdoor café while reading a book or people-watching is a great way to pass the time. Who knows, you might even meet a new friend. The good thing about Paris is that you can easily find budget-friendly diners, even in the more luxurious parts of the city.

3. Stay safe and be observant

Paris is one of the safest cities in the world. However, it still helps to take extra caution and stay observant of your surroundings. Always stay in well-lit areas if you are walking on your own at night. Be mindful of street scams and research where they usually occur. Before leaving home, make sure you share your plans with your relatives so they always know where you are.

4. Nightlife

In Paris, you don’t really have to visit the glitzy clubs just to have fun. You can simply hunker down in a bar or café and enjoy a glass of wine. Verjus, for instance, is an excellent wine bar for making new friends.

5. Visit a museum

The city can get crowded, so if you want some peace and quiet, you can head to the museums. As mentioned earlier, the city is rich in culture and art, so you can view plenty of stunning paintings and sculptures in each one.

Feel free to share other tips in the comments section!

The post How to Have Fun while Exploring Paris Alone appeared first on All Peers.

June 29, 2015

Julian Bond: My first public post in G+. 8 July 2011, a week or so after the launch. [Technorati links]

June 29, 2015 07:14 AM
My first public post in G+. 8 July 2011, a week or so after the launch.
https://plus.google.com/+JulianBond23/posts/1qBXL7B367V

I've been searching for something in G+ that lets me see all the posts I've commented on. I can't find it yet.

4 years later. There's still no way of getting a list of all the posts you've commented on.

[from: Google+ Posts]

Katasoft: Hello, Stormpath! [Technorati links]

June 29, 2015 05:00 AM

micah_hair

I’m Micah and this week I joined Stormpath as a Developer Evangelist, supporting Java and the JVM.

In this new role, I get to do some of my most favorite activities as my job: coding, pairing with other developers and writing. I am part of a growing team of software engineers who not only write code, but get to express all that nerdy goodness through interactions in the developer community.

About me

I developed an interest in computers right at the beginning of the personal computer revolution when I was in 6th grade. I first played with CBM PETs in school (pop quiz: What does PET stand for? No googling! Answer below). My first home computer was a Commodore Vic-20. Then a Commodore 64 and even the rare SX-64 (LOAD"*",8,1 – anyone?).

computers

After learning what I was doing with my 300 baud modem and phreaking tools, my parents sought a less felonious outlet for my interest. My father, a dentist, purchased an Osborne 1 (CP/M for the win!) and had me help him automate his office.

Since then, my love affair with technology has continued to develop and evolve.

I’ve had a wide ranging career working at the Syfy Channel for its first online presence, large banks and insurance companies including JP Morgan Chase and Metlife, and startups.

The two primary themes throughout have been my love of APIs and information security. My recent code kata on spreadsheet column name conversion exercises both algorithmic thinking and creating APIs.
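For a flavor of what that kata involves, here is a minimal illustrative sketch (in JavaScript, and not the kata’s actual solution): spreadsheet columns use a bijective base-26 naming scheme, so converting a 1-based column number to its letter name looks roughly like this:

// Illustrative only: convert a 1-based column number to its spreadsheet name.
// 1 -> 'A', 26 -> 'Z', 27 -> 'AA', 703 -> 'AAA'
function columnName(n) {
  var name = '';
  while (n > 0) {
    var rem = (n - 1) % 26;                      // shift to 0..25; there is no "zero" letter
    name = String.fromCharCode(65 + rem) + name; // 65 is the char code for 'A'
    n = Math.floor((n - 1) / 26);
  }
  return name;
}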

I’m very proud of my password generator program, Passable, both as an iOS app (sorry Android!) and as a Google Chrome extension.

The code for these projects (and others) is on my github page.

I’m a maker at heart, whether it’s refurbishing a Dark Tower game or building out a MAME arcade cabinet.

mame

Jumping at Stormpath

While I love writing, my last blog post on my personal site was back in 2013. I co-authored a book in 2006, and published articles in the 2000s in Java Developer’s Journal and Dr. Dobb’s, among others.

When I saw the Developer Evangelist position at Stormpath, I jumped at it! Getting to work with some of the top security people in the world, engaging with other developers, developing software to support other developers and writing are all baked into the work. Somebody pinch me – I must be dreaming!

What I’ll Be Working On

I’ll be focusing on making the experience of using Java and Stormpath more awesomer (technical term) than it already is. It’s an exciting time to be working in Java. The addition of a functional style of programming to the language, with lambdas and the Stream API, has totally revitalized the Java language and community. The breadth of frameworks is remarkable, and you can now set up a fully functional MVC, single-page, websocket-enabled app in seconds using tools like Spring Boot or Dropwizard. Never before has Java been so accessible, and I am excited to work with you on integrating Stormpath into your stack.

I am so psyched that Stormpath exists! The audacity to take on such a critical facet of the technology landscape makes me excited and a little queasy at the same time (in a good way).

How I See Stormpath

For you Java #nerds out there, Stormpath is a little like spinning up your own threads. Sure, the language has syntax to do it, but the container does it much better and safer than you or I ever could. So, I’ll rely on my handy container to manage concurrency and I will focus on the task at hand.

Likewise, while you could roll your own security layer and (hopefully) use best-practices, Stormpath does it better. All you need to do is break out your handy REST API skills and you’re all set.

Part of my role here – and my passion – is to be here for YOU, the Software Engineer. Feel free to reach out to me at: micah@stormpath.com. I am looking forward to meeting online and in person!

BTW, PET = Personal Electronic Transactor

June 27, 2015

Julian Bond: The dubstep pioneers talking about how it happened. [Technorati links]

June 27, 2015 09:58 PM
The dubstep pioneers talking about how it happened.

http://www.vice.com/en_uk/read/an-oral-history-of-dubstep-vice-lauren-martin-610/page/0
vs
http://energyflashbysimonreynolds.blogspot.co.uk/2015/06/dubstep-in-hindsight.html

Well it's History now.... 15 years since its earliest dawn-glimmers....seven years or so since it got hijacked  / "went down the wrong path"  - leaving the faithful bereft, making them disperse, or launch the postdubstep era

---

If you're interested, that oral history is well worth reading. 

[from: Google+ Posts]

Julian Bond: The Essential Guide to Cyberpunk [Technorati links]

June 27, 2015 08:28 AM
The Essential Guide to Cyberpunk
http://io9.com/the-essential-cyberpunk-reading-list-1714180001

It's not a bad starting point and you could do worse than read everything any of those authors have ever written. Or just start with Mirrorshades and follow the links to each author. But also these books and everything by these people as well.

Paul Di Filippo - Ribofunk
Lucius Shepard - Life During Wartime
Michael Swanwick - Vacuum Flowers
Mink Mole - Alligator Alley
Jeff Noon - Needle in the Groove
Walter Jon Williams - Hardwired
Ken MacLeod - The Execution Channel
Jack Womack - Random Acts of Senseless Violence
Samuel Delany - Dhalgren
Ian McDonald - The Dervish House
Misha - Red Spider White Web
Martin Bax - The Hospital Ship
Paolo Bacigalupi - The Windup Girl
St. Jude (R.U.Sirius, Mondo2000) - Cyberpunk handbook : the real cyberpunk fakebook

http://www.librarything.com/tag/cyberpunk

Back in the 80s and 90s I read everything cyberpunk I could find. My tastes veered off, though, towards Slipstream and the sort of cross-breed between Cyberpunk, Slipstream, Magical Realism and the late-60s New Worlds crew like J.G. Ballard. Part of the reason for that is that the writing is generally better. It's a common criticism of early books by new SciFi authors that the writing is often terrible even while the ideas are interesting.

What's a bit sad is how much of that stuff is getting really hard to find now, long since out of print and pulped. Anyone got a copy of Lewis Shiner - Deserted Cities of the Heart?

[from: Google+ Posts]
June 25, 2015

Gluu: Announcing the formation of the OTTO WG [Technorati links]

June 25, 2015 07:47 PM

Open Trust Taxonomy for OAuth2

Note: This announcement originally appeared on the Kantara Initiative website.

We are pleased to announce the formation of the OTTO WG! OTTO stands for Open Trust Taxonomy for OAuth2. We hope that you will participate in this innovative new work group!

The working group will develop the basic structures needed for the creation of multi-party federations between OAuth2 entities. The intent is to create a foundation of trust and drive down the cost of collaboration by publishing technical and legal information. These structures will include the set of APIs and related data structures enabling an OAuth entity to manage which entities it trusts and for other OAuth entities to discover members of the federation and details of the services.

The Work Group is necessary to bring together collaborators from existing SAML federations and the OAuth community to collaborate on a draft solution that meets their shared goals in this area and takes into account lessons learned from the past ten years of SAML.

Specifically, this Work Group is responsible for defining these APIs and data structures.

The APIs and data structures will enable discovery of the members of the federation and details about their services, key material and technical capabilities. The final scope will be refined after consideration of the use cases.

Existing SAML Federation XML structures will inform this work, but the data structures will not be expressed in XML but in JSON. The functions supported in existing SAML federations should be supported. Additionally, support for a more efficient and scalable discovery process and dynamic integration process will be considered.

Welcome OTTO WG to our community!

You can learn more about the motivation behind the formation of this working group here.

Katasoft: Build An API Service in Node.js with Stormpath, Twilio and Stripe [Technorati links]

June 25, 2015 05:00 PM

BTC SMS Intro

Building a full-fledged API service isn’t as hard as you may think. By taking advantage of some really useful API services and open source libraries, you can rapidly develop an API service in an incredibly short amount of time!

In this article, I’m going to walk you through the process of building an API service that uses SMS to keep you up-to-date with the current value of Bitcoin: Bitcoin SMS!

This API service uses Twilio to send SMS messages containing the current Bitcoin price, Stormpath to manage developer accounts and API keys, and Stripe to bill developers 2 cents for every successful request.

If you’re at all interested in building API services, or API companies — this article is meant specifically for you!

NOTE: Prefer video over a long article? If so, you can watch my screencast covering the same material on YouTube here: https://www.youtube.com/watch?v=THDPG2gH7o0

ALSO: all of the code we’ll be writing here can be found on Github here: https://github.com/rdegges/btc-sms

What We’re Building: BitCoin SMS API Service

BTC SMS Demo

What we’re going to build today is a publicly available API service called BTC SMS which allows developers to make API calls that send SMS messages to specified phone numbers with the current value of Bitcoin.

The idea is that a lot of people might want to know the value of Bitcoin at a specific point in time — in order to buy or sell their Bitcoin — so this API service makes that information easy to access by sending SMS messages to a user with the current value of Bitcoin.

The API service we’re building will allow developers to make requests like this:

POST /api/message
{
  "phoneNumber": "+18182223333"
}

In the above example, this request would kick off an SMS message to +18182223333 that says something along the lines of “1 Bitcoin is currently worth $525.00 USD.”

Here’s how it’ll look:

BTC SMS API Call

Now, since we’re actually building an entire API service here — and not just a plain old API — we’ll also be charging money! We’ll be charging developers 2 cents per successful API request in order to cover our costs and make a little bit of profit =) (Twilio charges us roughly a cent per outgoing SMS, so each successful request nets about a penny before the monthly phone number fee.)

So, now that we’ve briefly discussed what we’re going to be making — let’s actually make it!

Set Up Twilio, Stripe and Stormpath For Your API

To get started, you’ll need to create some accounts that we’ll be using for the rest of this guide.

Twilio for SMS

First, you’ll want to go and create an account with Twilio. Twilio is an API service that lets you do all things telephony related: send and receive calls and SMS messages.

For the purposes of the application we’re building, we’ll only be using Twilio to send SMS messages, but there’s a lot more it can do.

Signing up for a Twilio account is free, but if you want to actually follow through with the rest of this article, you’ll need to purchase a phone number you can send SMS messages from — and this typically costs $1 USD per month + 1c per SMS message you send.

Here’s what you’ll want to do: purchase a phone number that you can send SMS messages from in your Twilio dashboard, then take note of your Account SID and Auth Token from your account settings. We’ll need all three of these values later.

Twilio Buy a Number

Twilio API Credentials

Stripe for Payments

Next, let’s create a Stripe account. Stripe is a payments provider that allows you to accept credit cards, debit cards, and even Bitcoin as payment options on your site.

Since this demo application will be charging users money based on the number of API calls they make, this is necessary.

Once you’ve created your Stripe account, you’ll want to set your Stripe dashboard to TEST mode — this lets you view the development mode sandbox where you can use Stripe like normal, but with fake credit cards and money:

Stripe Test Dashboard

Next, you’ll want to visit your Stripe API Keys page to view your Stripe API keys. The ones you’ll want to use for testing are the Test Secret Key and Test Publishable Key values:

Stripe API Keys Image

Be sure to take note of both those API key values — we’ll be needing those later.

Stormpath for Authentication and API Key Management

Now that we’ve got both Twilio and Stripe set up, go create a Stormpath account. Stormpath is a free API service that lets you store user accounts and user data. It makes things like signing users up for your site, managing users, resetting passwords, and handling authorization really easy.

Instead of needing to run a database to store your user data in, we can use Stormpath to simplify and speed up the process. It also comes with some nice pre-built login / registration / password reset pages that make things really nice.

Once you’ve signed up for Stormpath, you’ll want to create a new API key and download it locally:

Stormpath Provision API Key

When you generate a new Stormpath API key, you’ll automatically download an apiKey.properties file — this contains your API key information. This file contains two values: an API Key ID and an API Key Secret — we’ll need both of these later.

Next, you’ll need to create a new Stormpath Application. Generally, you’ll want to create one Application per project you work on — since we’re building this BTC SMS project, we’ll create an Application called BTC SMS:

Stormpath Create Application

After creating your Application, be sure to copy down the REST URL link — we’ll be needing this later on to reference our Application when we start coding =)

Lastly, you’ll want to navigate to your Stormpath Directories Page, click on your Directory, then go to the Workflows tab and explicitly enable the Verification Workflow.

Here’s how it works:

Stormpath Verification Workflow

That’s it! Stormpath will handle the rest =)

Bootstrapping an Express.js Application

Now that we’ve gotten all of our required API service accounts setup and configured properly, we’re ready to start writing some code!

The first thing we’ll need to do is create a minimal Express.js application that we can use as we move forward.

Here are the files and folders we’ll be creating (we’ll walk through what each of them contains as we go):

btc-sms
├── bower.json
├── index.js
├── package.json
├── routes
│   ├── api.js
│   ├── private.js
│   └── public.js
├── static
│   └── css
│       └── main.css
└── views
    ├── base.jade
    ├── dashboard.jade
    ├── docs.jade
    ├── index.jade
    └── pricing.jade

4 directories, 12 files

The Views

Now that we’ve seen what our app looks like at a structural level, let’s take a look at the views.

Taking a look at the views first will give you a good understanding of how the site looks / functions before digging into the backend code.

base.jade

The base.jade view contains a page outline and navbar that all pages of the site use. This lets us build a ‘modular’ website with regards to the front-end of the website:

block vars

doctype html
html(lang='en')
  head
    meta(charset='utf-8')
    meta(http-equiv='X-UA-Compatible', content='IE=edge')
    meta(name='viewport', content='width=device-width, initial-scale=1')
    title #{siteTitle} - #{title}
    link(href='/static/bootswatch/sandstone/bootstrap.min.css', rel='stylesheet')
    link(href='/static/css/main.css', rel='stylesheet')
    <!--[if lt IE 9]>
    script(src='/static/html5shiv/dist/html5shiv.min.js')
    script(src='/static/respond/dest/respond.min.js')
    <![endif]-->
  body
    nav.navbar.navbar-default.navbar-static-top
      - var nav = {}; nav[title] = 'active'
      .container
        .navbar-header
          button.navbar-toggle.collapsed(type='button', data-toggle='collapse', data-target='#navbar-collapse')
            span.sr-only Toggle navigation
            span.icon-bar
            span.icon-bar
            span.icon-bar
          a.navbar-brand(href='/') #{siteTitle}
        #navbar-collapse.collapse.navbar-collapse
          ul.nav.navbar-nav
            li(class='#{nav.Home}')
              a(href='/') Home
            li(class='#{nav.Pricing}')
              a(href='/pricing') Pricing
            li(class='#{nav.Docs}')
              a(href='/docs') Docs
            if user
              li(class='#{nav.Dashboard}')
                a(href='/dashboard') Dashboard
              li(class='#{nav.Logout}')
                a(href='/logout') Logout
            else
              li(class='#{nav.Login}')
                a(href='/login') Login
              li(class='#{nav.Register} create-account')
                a(href='/register') Create Account
    block body
    script(src='/static/jquery/dist/jquery.min.js')
    script(src='/static/bootstrap/dist/js/bootstrap.min.js')

Some important things to take note of: the block vars section lets each child template set its own page title, the navbar marks the current page as active based on that title, static assets are served from the /static prefix, and the user variable (populated by the Stormpath middleware when someone is logged in) controls whether the Login / Create Account or Dashboard / Logout links are shown.

index.jade

Our index.jade template renders the home page of our site — it’s just a simple static page:

extends base

block vars
  - var title = 'Home'

block body
  .container.index
    h1.text-center Get BTC Rates via SMS
    .row
      .col-xs-12.col-md-offset-2.col-md-8
        .jumbotron.text-justify
          p.
            #{siteTitle} makes it easy to track the value of Bitcoin via SMS.
            Each time you hit the API service, we'll SMS you the current Bitcoin
            price in a user-friendly way.
          a(href='/register')
            button.btn.btn-lg.btn-primary.center-block(type='button') Get Started!

You’ll notice that our Get Started! button is linking to a registration page — this registration page is generated automatically by the Stormpath library that you’ll see later on.

docs.jade

The docs.jade template is just a static page that contains API documentation for developers visiting the site:

extends base

block vars
  - var title = 'Docs'

block body
  .container.docs
    h1.text-center API Documentation
    .row
      .col-xs-12.col-md-offset-2.col-md-8
        p.text-justify
          i.
            This page contains the documentation for this API service.  There is
            only a single API endpoint available right now, so this document is
            fairly short.
        p.text-justify
          i.
            Questions? Please email <a href="mailto:support@apiservice.com">support@apiservice.com</a>
            for help!
        h2 REST Endpoints
        h3 POST /api/message
        span Description
        p.description.
          This API endpoint takes in a phone number, and sends this phone an
          SMS message with the current Bitcoin exchange rate.
        span Input
        .table-box
          table.table.table-bordered
            thead
              tr
                th Field
                th Type
                th Required
            tbody
              tr
                td phoneNumber
                td String
                td true
        span Success Output
        .table-box
          table.table.table-bordered
            thead
              tr
                th Field
                th Type
                th Example
            tbody
              tr
                td phoneNumber
                td String
                td "+18182223333"
              tr
                td message
                td String
                td "1 Bitcoin is currently worth $225.42 USD."
              tr
                td cost
                td Integer
                td #{costPerQuery}
        span Failure Output
        .table-box
          table.table.table-bordered
            thead
              tr
                th Field
                th Type
                th Example
            tbody
              tr
                td error
                td String
                td "We couldn't send the SMS message. Try again soon!"
        span Example Request
        pre.
          $ curl -X POST \
              --user 'id:secret' \
              --data '{"phoneNumber": "+18182223333"}' \
              -H 'Content-Type: application/json' \
              'http://apiservice.com/api/message'

pricing.jade

Like our docs page — the pricing.jade page is just a static page that tells users how much our service costs to use:

extends base

block vars
  - var title = 'Pricing'

block body
  .container.pricing
    h1.text-center Pricing
    .row
      .col-xs-offset-2.col-xs-8.col-md-offset-4.col-md-4.price-box.text-center
        h2 #{costPerQuery}&cent; / query
        p.text-justify.
          We believe in simple pricing.  Everyone pays the same usage-based
          feeds regardless of size.
        p.text-justify.end.
          <i>Regardless of how many requests you make, BTC exchange rates are
          updated once per hour.</i>
    .row
      .col-xs-offset-2.col-xs-8.col-md-offset-4.col-md-4
        a(href='/register')
          button.btn.btn-lg.btn-primary.center-block(type='button') Get Started!

dashboard.jade

The dashboard.jade file is where users land once they’ve either created or logged into an account.

This page does a few things: it displays the user’s API key ID and secret, shows a simple analytics widget with the total number of queries the user has made, displays the user’s current account balance, and provides a Stripe Checkout button for depositing money.

The way we’re accepting billing information on this page is via the Stripe Checkout Button. To learn more about how this works, you can visit the Stripe site.

What happens is essentially this: if a user clicks the Stripe button, a Javascript popup will appear to collect the user’s payment information.

When the user is done entering their information, this credit card info will be validated by Stripe, and a unique token will be generated to allow us to bill this user later on.

Here’s the dashboard code:

extends base

block vars
  - var title = 'Dashboard'

block body
  .container.dashboard
    .row.api-keys
      ul.list-group
        .col-xs-offset-1.col-xs-10
          li.list-group-item.api-key-container
            .left
              strong API Key ID:
              span.api-key-id #{user.apiKeys.items[0].id}
            .right
              strong API Key Secret:
              span.api-key-secret #{user.apiKeys.items[0].secret}
    .row.widgets
      .col-md-offset-1.col-md-5
        .panel.panel-primary
          .panel-heading.text-center
            h3.panel-title Analytics
          .analytics-content.text-center
            span.total-queries #{user.customData.totalQueries}
            br
            span
              i.
                *total queries
      .col-md-5
        .panel.panel-primary
          .panel-heading.text-center
            h3.panel-title Billing
          .billing-content.text-center
            span.account-balance $#{(user.customData.balance / 100).toFixed(2)}
            br
            span
              i.
                *current account balance
            form(action='/dashboard/charge', method='POST')
              script.stripe-button(
                src = 'https://checkout.stripe.com/checkout.js',
                data-email = '#{user.email}',
                data-key = '#{stripePublishableKey}',
                data-name = '#{siteTitle}',
                data-amount = '2000',
                data-allow-remember-me = 'false'
              )

Static Assets

Now that we’ve taken a quick look at the views, let’s take a quick look at the static assets we’ll be using.

In our case, since there’s not a lot of styling done here — we’ve only got a single CSS file:

/*
 * Navbar settings.
 */
ul.nav.navbar-nav {
  float: right;
}

li.create-account > a {
  color: #fff !important;
}

/*
 * Index page settings.
 */
.index h1 {
  margin-top: 2em;
}

.index .jumbotron {
  margin-top: 4em;
}

.index button {
  margin-top: 4em;
  font-size: 1em;
}

/*
 * Dashboard page settings.
 */
.dashboard .api-keys {
  margin-top: 3em;
}

.dashboard .api-key-container {
  min-height: 4em;
}

.dashboard .widgets {
  margin-top: 4em;
}

.dashboard .api-key-secret {
  color: red;
}

.dashboard h3 {
  font-size: 1.2em !important;
}

.dashboard span.api-key-id, .dashboard span.api-key-secret {
  font-family: "Lucida Console", Monaco, monospace;
  margin-left: .5em;
}

.dashboard .left {
  float: left;
}

.dashboard .right {
  float: right;
}

.dashboard .panel {
  padding-bottom: 2em;
}

.dashboard .panel-heading {
  margin-bottom: 2em;
}

.dashboard .analytics-content, .dashboard .billing-content {
  padding-left: 2em;
  padding-right: 2em;
}

.dashboard .account-balance, .dashboard .total-queries {
  font-size: 2em;
}

.dashboard form {
  margin-top: 2em;
}

/*
 * Pricing page settings.
 */
.pricing .price-box {
  border: 2px solid #f8f5f0;
  border-radius: 6px;
  margin-top: 4em;
  margin-bottom: 4em;
}

.pricing h2 {
  margin-bottom: 1em;
}

.pricing .end {
  margin-bottom: 2em;
}

/*
 * Documentation page settings.
 */
.docs h1 {
  margin-bottom: 2em;
}

.docs h2 {
  margin-top: 2em;
  margin-bottom: 2em;
}

.docs h3 {
  /*padding-left: 2em;*/
  font-weight: bold;
}

.docs span {
  font-size: 1.2em;
  padding-left: 2.7em;
  font-weight: bold;
  margin-top: 1em;
  margin-bottom: .5em;
  display: block;
}

.docs .description {
  font-size: 1.2em;
  padding-left: 2.6em;
}

.docs .table-box {
  padding-left: 3em !important;
  margin-top: 1em !important;
}

.docs pre {
  margin-left: 3em;
}

package.json and bower.json

Now, let’s get into some real code!

Below is the package.json file that declares all of our Node.js dependencies, and makes installing this application simple:

{
  "name": "api-service-starter",
  "version": "0.0.0",
  "description": "An API service starter kit for Node.",
  "main": "index.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "keywords": [
    "api",
    "service",
    "starter",
    "kit"
  ],
  "author": "Randall Degges",
  "license": "UNLICENSE",
  "dependencies": {
    "async": "^0.9.0",
    "body-parser": "^1.12.3",
    "express": "^4.12.3",
    "express-stormpath": "^1.0.4",
    "jade": "^1.9.2",
    "request": "^2.55.0",
    "stripe": "^3.3.4",
    "twilio": "^2.0.0"
  }
}

To install this project, you can simply run $ npm install from the command line — this will automatically download all Node dependencies for ya =)

Likewise, you can also use bower to automatically download and install all front-end dependencies via $ bower install. The bower.json file makes this possible:

{
  "name": "api-service-starter",
  "main": "index.js",
  "version": "0.0.0",
  "authors": [
    "Randall Degges <r@rdegges.com>"
  ],
  "description": "An API service starter kit for Node.",
  "keywords": [
    "api",
    "service",
    "starter",
    "kit"
  ],
  "license": "UNLICENSE",
  "private": true,
  "ignore": [
    "**/.*",
    "node_modules",
    "bower_components",
    "test",
    "tests"
  ],
  "dependencies": {
    "jquery": "~2.1.3",
    "bootstrap": "~3.3.4",
    "respond": "~1.4.2",
    "html5shiv": "~3.7.2",
    "bootswatch": "~3.3.4+1"
  }
}

Application Setup

Now that we’ve covered the basics, let’s take a look at what makes our project tick: the index.js file. This holds the main Express.js web application, configures our libraries, and initializes our web server:

'use strict';

var async = require('async');
var express = require('express');
var stormpath = require('express-stormpath');

var apiRoutes = require('./routes/api');
var privateRoutes = require('./routes/private');
var publicRoutes = require('./routes/public');

// Globals
var app = express();

// Application settings
app.set('view engine', 'jade');
app.set('views', './views');

app.locals.costPerQuery = parseInt(process.env.COST_PER_QUERY);
app.locals.siteTitle = 'BTC SMS';
app.locals.stripePublishableKey = process.env.STRIPE_PUBLISHABLE_KEY;

// Middlewares
app.use('/static', express.static('./static', {
  index: false,
  redirect: false
}));
app.use('/static', express.static('./bower_components', {
  index: false,
  redirect: false
}));
app.use(stormpath.init(app, {
  enableAccountVerification: true,
  expandApiKeys: true,
  expandCustomData: true,
  redirectUrl: '/dashboard',
  secretKey: 'blah',
  postRegistrationHandler: function(account, req, res, next) {
    async.parallel([
      // Set the user's default settings.
      function(cb) {
        account.customData.balance = 0;
        account.customData.totalQueries = 0;
        account.customData.save(function(err) {
          if (err) return cb(err);
          cb();
        });
      },
      // Create an API key for this user.
      function(cb) {
        account.createApiKey(function(err, key) {
          if (err) return cb(err);
          cb();
        });
      }
    ], function(err) {
      if (err) return next(err);
      next();
    });
  }
}));

// Routes
app.use('/', publicRoutes);
app.use('/api', stormpath.apiAuthenticationRequired, apiRoutes);
app.use('/dashboard', stormpath.loginRequired, privateRoutes);

// Server
app.listen(process.env.PORT || 3000);

The first thing we’ll do is import all of the libraries necessary, as well as our route code (which we’ll hook up in a bit).

The next thing we’ll do is define our Express.js application, and tell it that we’re going to be using the Jade template language for our view code:

var app = express();

// Application settings
app.set('view engine', 'jade');
app.set('views', './views');

Once that’s been done, we’ll initialize some global settings:

app.locals.costPerQuery = parseInt(process.env.COST_PER_QUERY);
app.locals.siteTitle = 'BTC SMS';
app.locals.stripePublishableKey = process.env.STRIPE_PUBLISHABLE_KEY;

The COST_PER_QUERY and STRIPE_PUBLISHABLE_KEY values are being pulled out of environment variables. Instead of hard-coding credentials into your source code, storing them in environment variables is typically a better approach, as you don’t need to worry about accidentally exposing them.

The COST_PER_QUERY environment variable tells our app how many cents we should charge for each successful API request — in our case, we’ll set this to 2.

The STRIPE_PUBLISHABLE_KEY environment variable should be set to our Stripe Publishable Key that we retrieved earlier on when we created a Stripe account.

Here’s an example of how you might set these variables from the command line:

$ export COST_PER_QUERY=2
$ export STRIPE_PUBLISHABLE_KEY=xxx

Next, we’ll use the express.static built-in middleware to properly serve our app’s static assets:

app.use('/static', express.static('./static', {
  index: false,
  redirect: false
}));
app.use('/static', express.static('./bower_components', {
  index: false,
  redirect: false
}));

And… After that, we’ll initialize the Stormpath library:

app.use(stormpath.init(app, {
  enableAccountVerification: true,
  expandApiKeys: true,
  expandCustomData: true,
  redirectUrl: '/dashboard',
  secretKey: 'blah',
  postRegistrationHandler: function(account, req, res, next) {
    async.parallel([
      // Set the user's default settings.
      function(cb) {
        account.customData.balance = 0;
        account.customData.totalQueries = 0;
        account.customData.save(function(err) {
          if (err) return cb(err);
          cb();
        });
      },
      // Create an API key for this user.
      function(cb) {
        account.createApiKey(function(err, key) {
          if (err) return cb(err);
          cb();
        });
      }
    ], function(err) {
      if (err) return next(err);
      next();
    });
  }
}));

The express-stormpath library makes securing our website really easy.

A few of the options are worth calling out: enableAccountVerification turns on the email verification workflow we enabled in the Stormpath dashboard earlier; expandApiKeys and expandCustomData make each user’s API keys and custom data available on req.user; redirectUrl sends users to the dashboard after they log in or register; and the postRegistrationHandler runs once for each new account, initializing the user’s balance and query counter and creating an API key for them.

Now that we’ve configured all our middleware, the last thing we need to do is include our route code:

app.use('/', publicRoutes);
app.use('/api', stormpath.apiAuthenticationRequired, apiRoutes);
app.use('/dashboard', stormpath.loginRequired, privateRoutes);

The way this works is like so: the public routes are mounted at the root of the site with no authentication required, the API routes are mounted under /api and require API key authentication (HTTP Basic Auth), and the dashboard routes are mounted under /dashboard and require a logged-in user.

The Stormpath middlewares included here automatically handle all of the authentication logic for us 100%. If we try to access the /dashboard page without being logged into the website, for instance, we’ll be immediately redirected to the login page and forced to authenticate.

If we try to access an API route without using Basic Auth, we’ll get a 401 UNAUTHORIZED message with a nice JSON error.
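To make that concrete, here is a minimal client sketch (not part of the project’s own code) that calls the /api/message endpoint with HTTP Basic Auth, using the same request library the project already depends on. The API key values and phone number are placeholders you would substitute with your own:

'use strict';

var request = require('request');

request.post({
  url: 'http://localhost:3000/api/message',
  // Placeholder credentials: use the API key ID / secret shown on your dashboard page.
  auth: { user: 'API_KEY_ID', pass: 'API_KEY_SECRET' },
  json: { phoneNumber: '+18182223333' }
}, function(err, resp, body) {
  if (err) return console.error('Request failed:', err);

  if (resp.statusCode !== 200) {
    // 401 means Basic Auth was missing or invalid; 400, 402 and 500 responses carry a JSON 'error' field.
    return console.error('API error (' + resp.statusCode + '):', body);
  }

  // On success the body mirrors the JSON built in routes/api.js, e.g.:
  // { phoneNumber: '+18182223333', message: '1 Bitcoin is currently worth $225.42 USD.', cost: 2 }
  console.log('SMS sent:', body.message);
});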

The Routes

The main part of our application is the routes. This is where all the magic happens: billing, API code, SMS code, etc.

Let’s take a look at each route, and dissect how exactly they work.

public.js

First, let’s look at our public routes. These routes are responsible for serving our ‘public’ pages on the website:

'use strict';

var express = require('express');

// Globals
var router = express.Router();

// Routes
router.get('/', function(req, res) {
  res.render('index');
});

router.get('/pricing', function(req, res) {
  res.render('pricing');
});

router.get('/docs', function(req, res) {
  res.render('docs');
});

// Exports
module.exports = router;

As you can see, nothing is happening here except that we’re rendering our pre-defined Jade templates.

private.js

The private route file contains only a single route: our dashboard page. Because our BTC SMS app only has a single page for logged-in users (the dashboard) — this is where that logic is contained.

If we were building a larger site with many private pages that only logged-in users could access, they’d be included here as well:

'use strict';

var bodyParser = require('body-parser');
var express = require('express');
var stormpath = require('express-stormpath');
var stripe = require('stripe')(process.env.STRIPE_SECRET_KEY);

// Globals
var router = express.Router();

// Middlewares
router.use(bodyParser.urlencoded({ extended: true }));

// Routes
router.get('/', function(req, res) {
  res.render('dashboard');
});

router.post('/charge', function(req, res, next) {
  stripe.charges.create({
    amount: 2000,
    currency: 'usd',
    source: req.body.stripeToken,
    description: 'One time deposit for ' + req.user.email + '.'
  }, function(err, charge) {
    if (err) return next(err);
    req.user.customData.balance += charge.amount;
    req.user.customData.save(function(err) {
      if (err) return next(err);
      res.redirect('/dashboard');
    });
  });
});

// Exports
module.exports = router;

Let’s see how this works.

First, after creating an Express router, we’re using the bodyParser middleware to decode form data.

On this page, we’ll have a form that allows us to accept payment from a user, and because of this, we’ll need the ability to read the form data we’re receiving. This is what the bodyParser middleware is used for:

router.use(bodyParser.urlencoded({ extended: true  }));

This middleware lets us access form data via req.body. So, for instance, if a form field called username was posted to us, we could access that data by saying req.body.username.
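As a tiny self-contained illustration (a hypothetical route, not part of this project), a form field named username posted to an Express app wired up with this middleware would be read like so:

'use strict';

var bodyParser = require('body-parser');
var express = require('express');

var app = express();
app.use(bodyParser.urlencoded({ extended: true }));

// Hypothetical route: echoes back a posted 'username' form field.
app.post('/example', function(req, res) {
  res.send('Hello, ' + (req.body.username || 'anonymous') + '!');
});

app.listen(3001);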

Next, we’ll register a router handler for the GET requests to our dashboard page:

router.get('/', function(req, res) {
  res.render('dashboard');
});

This code simply renders the dashboard page if a user visits the /dashboard URL in their browser.

Next, we’ll register a POST handler for the dashboard page:

router.post('/charge', function(req, res, next) {
  // stuff
});

This code will get run if a user attempts to deposit money into their account:

Stripe Deposit Money

What happens in our template code is all of the card collection and verification stuff. When we receive this POST request from the browser, what that means is that the user’s card is valid, and Stripe has given us permission to actually charge this user some money.

In our case, we’ll be charging users a flat fee of $20.

Using the stripe library, we’ll then charge the user’s card:

stripe.charges.create({
  amount: 2000,
  currency: 'usd',
  source: req.body.stripeToken,
  description: 'One time deposit for ' + req.user.email + '.'
}, function(err, charge) {
  if (err) return next(err);
  req.user.customData.balance += charge.amount;
  req.user.customData.save(function(err) {
    if (err) return next(err);
    res.redirect('/dashboard');
  });
});

Once the user’s card has been successfully charged, we’ll also update the user account’s balance, so that we now know how much money this user has paid us.

And… That’s it for billing! Quite easy, right?

api.js

The last route we need to cover is the API route. Since our API service only has a single API call, this file only holds one API route. If we were building a more complex API service, however, this file might be a lot longer:

'use strict';

var bodyParser = require('body-parser');
var express = require('express');
var request = require('request');
var twilio = require('twilio')(process.env.TWILIO_ACCOUNT_SID, process.env.TWILIO_AUTH_TOKEN);

// Globals
var router = express.Router();
var BTC_EXCHANGE_RATE;
var COST_PER_QUERY = parseInt(process.env.COST_PER_QUERY);

// Middlewares
router.use(bodyParser.json());

// Routes
router.post('/message', function(req, res) {
  if (!req.body || !req.body.phoneNumber) {
    return res.status(400).json({ error: 'phoneNumber is required.' });
  } else if (!BTC_EXCHANGE_RATE) {
    return res.status(500).json({ error: "We're having trouble getting the exchange rates right now. Try again soon!" });
  } else if (req.user.customData.balance < COST_PER_QUERY) {
    return res.status(402).json({ error: 'Payment required. You need to deposit funds into your account.' });
  }

  var message = '1 Bitcoin is currently worth $' + BTC_EXCHANGE_RATE  + ' USD.';

  twilio.sendMessage({
    to: req.body.phoneNumber,
    from: process.env.TWILIO_PHONE_NUMBER,
    body: message
  }, function(err, resp) {
    if (err) return res.status(500).json({ error: "We couldn't send the SMS message. Try again soon!" });

    req.user.customData.balance -= COST_PER_QUERY;
    req.user.customData.totalQueries += 1;
    req.user.customData.save();

    res.json({ phoneNumber: req.body.phoneNumber, message: message, cost: COST_PER_QUERY });
  });
});

// Functions
function getExchangeRates() {
  request('http://api.bitcoincharts.com/v1/weighted_prices.json', function(err, resp, body) {
    if (err || resp.statusCode !== 200) {
      console.log('Failed to retrieve BTC exchange rates.');
      return;
    }

    try {
      var data = JSON.parse(body);
      BTC_EXCHANGE_RATE = data.USD['24h'];
      console.log('Updated BTC exchange rate: ' + BTC_EXCHANGE_RATE + '.');
    } catch (err) {
      console.log('Failed to parse BTC exchange rates.');
      return;
    }
  });
}

// Tasks
getExchangeRates();
setInterval(getExchangeRates, 60 * 60 * 1000);  // once per hour, in milliseconds

// Exports
module.exports = router;

Like our private.js routes, we’ll also be using the bodyParser middleware here to read in API request data.

We’ll also be making use of the twilio library to send SMS messages to users, as well as the request library to fetch the current Bitcoin exchange rates from bitcoincharts.

The bitcoincharts site provides a publicly available API that lets you grab the current Bitcoin exchange rates. This is where we’ll be grabbing our Bitcoin value information from =) You can find more information on this here: http://api.bitcoincharts.com/v1/weighted_prices.json

So, once we’ve defined our Express router, the first thing we’ll do is declare some globals:

var BTC_EXCHANGE_RATE;
var COST_PER_QUERY = parseInt(process.env.COST_PER_QUERY);

The BTC_EXCHANGE_RATE variable will be set to the current value of Bitcoin in USD, and updated frequently. This is what we’ll use when we send out SMS messages to users.

The COST_PER_QUERY variable is the amount of money (in cents) that we’ll charge a user for each successful API request made.

Next, we’ll define a helper function called getExchangeRates which queries the bitcoin charts API service to find the current value of a Bitcoin:

function getExchangeRates() {
  request('http://api.bitcoincharts.com/v1/weighted_prices.json', function(err, resp, body) {
    if (err || resp.statusCode !== 200) {
      console.log('Failed to retrieve BTC exchange rates.');
      return;
    }

    try {
      var data = JSON.parse(body);
      BTC_EXCHANGE_RATE = data.USD['24h'];
      console.log('Updated BTC exchange rate: ' + BTC_EXCHANGE_RATE + '.');
    } catch (err) {
      console.log('Failed to parse BTC exchange rates.');
      return;
    }
  });
}

This function simply makes the request, then extracts the data. Finally it assigns the current value to the global BTC_EXCHANGE_RATE variable defined earlier.

After that’s done, we’ll invoke this function in two ways:

getExchangeRates();
setInterval(getExchangeRates, 60 * 60 * 1000);  // once per hour, in milliseconds

First, we’ll call it immediately so that as soon as our program starts, we get the current BTC value.

Next, we’ll call it on a setInterval job, which executes once per hour (the interval is given in milliseconds). This ensures that every hour we’ll update the BTC exchange rate with the latest value.

Finally, we’ll implement our API route /api/message, which is what developers will be using to send SMS messages with the current BTC exchange rate information:

router.post('/message', function(req, res) {
  if (!req.body || !req.body.phoneNumber) {
    return res.status(400).json({ error: 'phoneNumber is required.' });
  } else if (!BTC_EXCHANGE_RATE) {
    return res.status(500).json({ error: "We're having trouble getting the exchange rates right now. Try again soon!" });
  } else if (req.user.customData.balance < COST_PER_QUERY) {
    return res.status(402).json({ error: 'Payment required. You need to deposit funds into your account.' });
  }

  var message = '1 Bitcoin is currently worth $' + BTC_EXCHANGE_RATE  + ' USD.';

  twilio.sendMessage({
    to: req.body.phoneNumber,
    from: process.env.TWILIO_PHONE_NUMBER,
    body: message
  }, function(err, resp) {
    if (err) return res.status(500).json({ error: "We couldn't send the SMS message. Try again soon!" });

    req.user.customData.balance -= COST_PER_QUERY;
    req.user.customData.totalQueries += 1;
    req.user.customData.save();

    res.json({ phoneNumber: req.body.phoneNumber, message: message, cost: COST_PER_QUERY });
  });
});

This API route will first validate the request: it checks that a phoneNumber was supplied, that we currently have a BTC exchange rate available, and that the authenticated user has enough balance to cover the query, returning an appropriate JSON error (400, 500 or 402 respectively) if any of those checks fail.

Once we’ve done the error handling stuff, we’ll use the Twilio library to send an SMS message from our pre-purchased phone number (process.env.TWILIO_PHONE_NUMBER), with our pre-formatted message.

If, for any reason, the SMS message sending fails, we’ll return a 500 with an error message.

If the SMS message succeeds, we’ll subtract 2 cents from the user’s account balance, increment the user’s total queries counter, and then return a successful JSON response message.

It’s that simple!

Running the App

To run the app, as you saw through the code explanations, you’ll need to define some environment variables.

Here is a full list of the required environment variables you need to set to run this thing:

$ export COST_PER_QUERY=2
$ export STORMPATH_API_KEY_ID=xxx
$ export STORMPATH_API_KEY_SECRET=xxx
$ export STORMPATH_APPLICATION=https://api.stormpath.com/v1/applications/xxx
$ export STRIPE_SECRET_KEY=xxx
$ export STRIPE_PUBLISHABLE_KEY=xxx
$ export TWILIO_ACCOUNT_SID=xxx
$ export TWILIO_AUTH_TOKEN=xxx
$ export TWILIO_PHONE_NUMBER=+18882223333

These variables will be used automatically in the project code to make things work as needed.

Once these variables have been defined, you can then run your own instance of the BTC SMS app by saying:

$ node index.js

And then visiting http://localhost:3000.

To deposit money into your account using Stripe (in test mode), you can use the credit card number 4242 4242 4242 4242, with any fake expiration date and CVC number. This will deposit funds into your account.

Lastly, to make successful API requests, you can use the cURL command line tool like so:

$ curl -v --user 'API_KEY_ID:API_KEY_SECRET' -H 'Content-Type: application/json' --data '{"phoneNumber": "+18882223333"}' 'http://127.0.0.1:3000/api/message'

Be sure to substitute in your own phone number and API credentials (taken from the BTC SMS dashboard page).

What Did We Learn?

Building a simple API service isn’t really all that hard. In just a few hours you can structure, plan, and implement a full-fledged API company with only a few small, free-to-use services.

The old days where launching a company took a lot of time and effort are long gone. Using API services to speed up your development can save you a bunch of time, effort, and problems.

I hope that this tutorial gave you a little bit of inspiration, taught you something new, and hopefully gave you some new ideas for your own cool projects.

Be sure to check out Stormpath, Twilio, and Stripe for your next projects =)

Oh — and if you have any questions, leave us a comment below!

PS: If you’re currently learning how to build API services and do stuff with Node.js, I’d recommend really writing this code out and playing around with it yourself. There’s no better way to learn this stuff than by messing around with it on your own =)

-Randall

ForgeRock: Joining the ForgeRock Band [Technorati links]

June 25, 2015 02:58 PM

It’s been almost a decade since I had a “first week” at work, and as I contemplated my first 5 days, I realized that it simply didn’t feel like a first week at all. I felt at home immediately.

From the beginning, Mike, our CEO, was clear about the culture at ForgeRock as I went through the interview process. You won’t be surprised to know that this was a significant part of my desire to get the job.

It’s a privilege to have the opportunity to be a part of the talented ForgeRock team that is passionate about enabling organizations to realize their digital transformation strategies. Every digital initiative requires identity. Whether it is an IoT, cloud, mobile, or enterprise initiative, identity is required.

I think that my favorite moment came at the Identity Summit last month.  I sat through the fourth customer presentation endorsing the ForgeRock Identity Platform™ and openly sharing their experience in front of their peers. I realized that I had rarely seen such unabridged, unprompted enthusiasm in my 25-year career in technology.

So to the founders of ForgeRock, its committed employees and the extended ForgeRock community, I’m honored to be joining the band and have an opportunity to serve the best interests of our customers and this amazing company.

For more, read the press release.

The post Joining the ForgeRock Band appeared first on Home - ForgeRock.com.


Courion: Healthcare's Unique Security Challenges [Technorati links]

June 25, 2015 12:30 PM

Access Risk Management Blog | Courion

In the past few weeks, the U.S. Government has repeatedly been in the news for its recent hack—allegedly by the Chinese—which leaked over four million personnel records. However, this wasn't the only organization infiltrated by Chinese hackers in the past few months; according to the popular blog Mashable, over four million medical records were also stolen. This hack exemplifies a growing concern and a new set of challenges for healthcare organizations surrounding the use of digital records. Now that healthcare records are digitized and shared over networks and multiple devices, they have become very valuable to criminals, while hospitals, clinics and other organizations are still trying to find the best way to protect them.

 Healthcare Data Security and Privacy

While the issues surrounding digital records and possible breaches are the most often reported, they are not the only challenges unique to healthcare organizations. Aside from keeping records safe, organizations must deal with personnel issues such as the need for many different people to have access to those records. Not only do doctors and nurses need access to patient records, but the billing department, insurance companies and regulatory committees do as well. Some of these positions can easily be credentialed with role-based access; others are temporary employees or work across different functional areas and need access to different things at different times. It is hard for an organization to maintain proper access control and security with so many unique needs.

 

On top of the multiple user access requests are the multiple devices that the information needs to be available on. No longer are records and information kept behind the nurses’ station in folders or on desktops; now healthcare professionals are using multiple laptops, tablets, phones, and other mobile devices in their practices. Provisioning all of these devices for a new employee can take days—if not weeks. There is also the need to be able to remotely wipe all information from a device if it is lost or stolen. According to the most recent Healthcare Breach Report from Bitglass, 68 percent of all healthcare data breaches since 2010 were due to device theft or loss. It is extremely difficult to roll out a process that covers all of these needs on so many different devices.

roadmap to healthcare hipaa and byod mobile security

 

One last issue highlighted in the news recently is the vulnerability of specialized medical equipment to hacking. In another Mashable article, it is reported that drug pumps may be hackable in fatal ways because they enable a hacker to increase or decrease the dosage of drugs. One of the reasons it's so hard to secure these devices is that they are on a closed loop and can't be easily scanned for malware. The IT department cannot add software because that is an FDA matter, and therefore the hospital has a hard time monitoring these devices. So how is the security team supposed to monitor devices that they do not have full access and transparency to? For that matter, how is one team going to maintain visibility into all of the moving pieces of infrastructure and personnel in their organization?

 medical equipment

The best way to mitigate these risks is to implement an Identity and Access Management (IAM) solution. These solutions improve accuracy through automated provisioning policies and are also instrumental in providing transparency into all access and credentials in an organization. An IAM program helps with personnel risk by giving role-based access and visibility into all roles and credentials of any individual. It will also automatically grant credentials to any new employee across all devices and take that access away once he or she is terminated. This provisioning or de-provisioning can be done by any verified owner or administrator, both on a desktop and on any mobile device, making the speed and scalability of the project fit any organization's needs.

 

The risks for healthcare organizations will continue to grow as both the Internet of Things and the sophistication of hackers mature in the next few years. IAM solutions are driven by real-time data that allow you to make the most informed decisions possible. Imagine having information on what accounts were most at risk so that you could monitor the risk of data breaches; what if you could automatically wipe sensitive data from a laptop when your doctor forgot it on the plane? IAM solutions can allow you to mitigate these risks and give you visibility into your systems. While the risks and attacks will never stop coming for your organization, with IAM, you will have the ability to recognize these attacks sooner and fight back.

 

blog.courion.com

Kantara InitiativeMaciej Machulak – Innovators Under 35 [Technorati links]

June 25, 2015 08:17 AM
MaciejMachulak_1

Maciej Machulak – Innovator Under 35

This week we are extra proud of the work of one of our key contributors – Maciej Machulak. Recently, Maciej was named one of 2015’s Innovators Under 35 for his work to develop UMA.

Maciej is the UMA WG vice-chair, and in this role his commitment and passion have been integral to the success of the protocol's development to date. He, along with Eve Maler, has been a key driving force in the leadership of the UMA WG. This leadership has inspired all of the UMAnitarians to come together to provide users, governments, and enterprises the tools needed for a user-centric approach to resource sharing.

In an announcement from Innovators Under 35 Machulak explains, “UMA´s protocol is very flexible: for example you can choose to share a photograph not only with one specific person but also with anyone who will comply with the conditions imposed by the user, such as personal use only, or deleting the photo after a week and the authorizations manager can, in the event of an infraction, provide legal help to pursue anyone who has accessed the content and violated the conditions”.

We are so proud to have Maciej as a key part of our Kantara community and we hope you’ll join us in congratulating Maciej on his achievement. We can’t wait to see how much more you will accomplish! Keep that star shining, Maciej!

– On the behalf of Kantara Initiative
Joni Brennan, Executive Director

June 24, 2015

Mike Jones - MicrosoftJWK Thumbprint -06 addressing SecDir review comments [Technorati links]

June 24, 2015 08:34 AM

IETF logoA new JWK Thumbprint draft has been posted addressing the IETF Security Directorate (SecDir) comments from Adam Montville. The changes clarify aspects of the selection and dissemination of the hash algorithm choice and update the instructions to the Designated Experts when registering JWK members and values.

The specification is available at:

An HTML formatted version is also available at:

June 23, 2015

CourionFrom Cyber Security to Phishing Attacks #TechTuesday Roundup [Technorati links]

June 23, 2015 12:34 PM

Access Risk Management Blog | Courion

open lock, close lock

A British firm now allows people to log into their bank accounts using emojis

Smiley face. Thumbs up. Is that a crab? The language of teens and tweens everywhere may soon be protecting your sensitive information. That’s right, a British firm is trying out emojis in passwords, which they believe will lead to better security. The company claims that using emoji passwords is mathematically more secure. We'll let you decide for yourself.

Lucy England, BusinessInsider.com
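For a rough sense of the math behind that claim (the 44-emoji keypad and the four-character code length are assumptions for illustration, not figures from the article above), compare the sizes of the two search spaces:

// Hypothetical comparison: 4-digit PIN vs. a 4-character code from a 44-emoji keypad
Math.pow(10, 4)   // 10,000 possible PINs
Math.pow(44, 4)   // 3,748,096 possible emoji codes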

 

Bad News! LastPass breached. Good news? You should be OK…

When the company that promises to keep your passwords safe and secure gets hacked, do you feel safe? The good news here is that, thanks to the way LastPass protects its authentication data, it doesn’t look like the hackers were able to get into encrypted user data.

Paul Ducklin, NakedSecurity.com

@duckblog

 

Cardinals Investigated for Hacking Into Astros’ Database

We've grown used to seeing hacks on banks, retailers, and others with sensitive information that they can sell or share. However, hacking has now made its way to America's favorite pastime.  In the last week, the St. Louis Cardinals were accused of hacking into the Houston Astros' database.

Michael S. Schmidt, NewYorkTimes.com

 

Magazine Publisher Loses $1.5 Million in Phishing Attack

While we have all become savvy to the Nigerian Prince email scam, there is a new phishing attack on the horizon and it’s coming from the inside. Bonnier Publications was the target of an attack that cost them $1.5 million in transfers. Hackers accessed credentials for a former CEO and used his email account to order accounts payable to electronically transfer $3 million to a Chinese bank. Luckily the publisher caught on before the second payment was due.

Ashley Carman, SCMagazine.com

 

Has your Samsung phone been hacked?

Another mobile hack? That’s right, it was reported this week that possibly 600 million handsets are vulnerable to an attack that allows hackers to take photos and read texts on your phone. Users are being urged to stay away from unsecure Wi-Fi networks until the bug is fixed. No word yet on if you can use this as an excuse for all of those selfies.

Sarah Griffiths, MailOnline

blog.courion.com

Mark Dixon - OracleNo, I don’t want to engage! [Technorati links]

June 23, 2015 03:05 AM

Do you ever wonder why in the world you receive the ads you do on Facebook or other online venues? Methinks personalized, targeted advertising still has a long way to go.

Marketoonist 150622 engage

June 22, 2015

Kaliya Hamlin - Identity WomanInternet Identity Workshop #21 Registration is open [Technorati links]

June 22, 2015 10:09 PM

Here is the registration for the 21st Internet Identity Workshop.
Join us; it's going to be great.

Powered by Eventbrite

KatasoftCreate and Verify JWTs with Node.js [Technorati links]

June 22, 2015 07:00 AM

JWT, access token, token, OAuth token.. what does it all mean??

Properly known as “JSON Web Tokens”, JWTs are a fairly new player in the authentication space. Because they’re the cool new thing, everyone is eager to start using them. But are you doing it securely? In this article we’ll discuss best practices for JWTs, while showing you how to use the nJwt library for creating and verifying JWTs in your Node.js application.

What is a JSON Web Token (JWT)?

In a nutshell, a JWT is an object that can tell you things about a user and what they’re allowed to do. JWTs are meant to be issued by a trusted authority and given to a user. Typically this means your server is creating the JWT and sending it to your user’s web browser or mobile device for safe keeping.

JWTs can be digitally signed with a secret key. Doing so allows you to assert that a token was issued by your server and was not maliciously modified.

When the token is signed, it is “stateless”: this means you don’t need any extra information, other than the secret key, to verify that the information in the token is “true”. This great feature allows you to remove that pesky session table in your database.

When Should I Use Them?

JWTs are typically used to replace session identifiers. For example, you may be using a session system that stores an opaque ID on the client in a cookie while also maintaining session data in a database for that ID. With JWTs you can replace both the session data and the opaque ID.

You’ll still use a cookie to store the access token, but you need to make sure you secure your cookies. For more information on that topic I’ll refer you to my other post, Build Secure User Interfaces Using JSON Web Tokens (JWTs).

With the token stored in a secure cookie, the user’s client will supply the token on every subsequent request to your server. This allows the server to authenticate the request, without having to ask for credentials a second time (until the token expires, that is).

How to Create a JWT

There are a few things you’ll need in order to create a JWT for a user; we’ll walk through each of these steps in detail:

  1. Generate the secret signing key
  2. Authenticate the user
  3. Prepare the claims
  4. Generate the token
  5. Send the token to the client

1. Generate the Secret Signing Key

To be secure, we want to sign our tokens with a secret signing key. This key should be kept confidential and only accessible to your server. It should be highly random and not guessable. In our example, we’ll use the node-uuid library to create a random key for us:

var uuid = require('node-uuid');
var secretKey = uuid.v4();
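If you'd rather not add another dependency, a minimal alternative (my own sketch, not something the article prescribes) is to use Node's built-in crypto module for the random key:

var crypto = require('crypto');
// 32 random bytes, base64-encoded, is plenty of entropy for an HS256 signing secret
var secretKey = crypto.randomBytes(32).toString('base64');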

2. Authenticate the User

Before we can make claims about the user, we need to know who the user is. So the user needs to make an initial authentication request, typically by logging into your system by presenting a username and password in a form. It could also mean that they’ve presented an API key and secret to your API service, using something like the Authorization: Basic scheme.

In either situation, your server should verify the user’s credentials. After you’ve done this and obtained the user data from your system, you want to create a JWT which will “remember” the information about the user. We’ll put this information into the claims of the token.
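To make the flow concrete, here is a minimal sketch of what such a login handler might look like in Express; findUserByCredentials is a hypothetical helper against your own user store, and request body parsing is assumed to be configured:

// POST /login -- verify credentials first, then move on to building the JWT (steps 3-5 below)
app.post('/login', function (req, res) {
  // hypothetical lookup against your own user database
  findUserByCredentials(req.body.username, req.body.password, function (err, user) {
    if (err || !user) {
      return res.status(401).send('Invalid credentials');
    }
    // user is authenticated; continue with the claims, token, and cookie steps described below
    res.send('Logged in');
  });
});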

3. Prepare The Claims

Now that we have the user data, we want to build the “claims” of the JWT. That will look like this:

var claims = {
  sub: 'user9876',
  iss: 'https://mytrustyapp.com',
  permissions: 'upload-photos'
}

Let’s discuss each of these fields. Technically speaking, you can create a JWT without any claims, but these three are the most common: sub (the “subject”) identifies which user the token is about, iss (the “issuer”) identifies your server as the authority that created the token, and permissions is a custom claim describing what this user is allowed to do.

4. Generate the Token

Now that we have the claims and the signing key, we can create our JWT object:

var nJwt = require('njwt');
var jwt = nJwt.create(claims,secretKey);

This will be our internal representation of the token, before we send it to the user. Let’s take a look at what’s inside of it:

console.log(jwt)

You will see an object structure which describes the header and the claims body of the token:

{
  header: {
    typ: 'JWT',
    alg: 'HS256'
  },
  body: {
    jti: '3ee9364e-8aca-4e39-8ba2-74e654c7e083',
    iat: 1434695471,
    exp: 1434699071,
    sub: 'user9876',
    iss: 'https://mytrustyapp.com',
    permissions: 'upload-photos'
  }
}

You’ll see the claims that you specified earlier, and many other properties. These are the secure defaults that our library is setting for you: jti is a unique identifier for this particular token, iat is the timestamp at which the token was issued, exp is the expiration timestamp (one hour after iat by default), and the header’s typ and alg fields declare that this is a JWT signed with HMAC SHA-256.

5. Send the Token to the Client

Now that we have the JWT object, we can “compact” it to get the actual token, which will be a Base64 URL-Safe string that can be passed down to the client.

Simply call compact, and then take a look at the result:

var token = jwt.compact();
console.log(token);

What you see will look like this:

eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJqdGkiOiIyYWIzOWRhYS03ZGJhLTQxYTAtODhiYS00NGE2YmIyYjk3YWMiLCJpYXQiOjE0MzQ2OTY4MDEsImV4cCI6MTQzNDcwMDQwMX0.qRe18XcmNXB2Ily-U9dwF_8j9DuZOi35HJGppK4lpBw

This is the compact JWT, it’s a three-part string (separated by periods). It contains the encoded header, body, and signature.
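If you're curious, you can peek inside the first two parts yourself; they're just Base64 URL-encoded JSON (a quick inspection trick, not something you need in production code):

var parts = token.split('.');                            // [ header, body, signature ]
console.log(new Buffer(parts[0], 'base64').toString());  // {"typ":"JWT","alg":"HS256"}
console.log(new Buffer(parts[1], 'base64').toString());  // the claims JSON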

How you send the token to the client will depend on the type of application you are working with. The most common use case is a login form on a traditional website. In that situation you will store the token in an HttpOnly cookie, so you can simply set the cookie on the POST response.

For example, if you’re using the cookies library for Express:

var Cookies = require('cookies');
new Cookies(req,res).set('access_token',token,{
  httpOnly: true,
  secure: true      // for your production environment
});

Once the client has the token, it can use it for authentication. For example, if you’re building a single-page-app, the app will be making XHR requests of your server. When it does so, it will supply the cookie for authentication.

How to Verify JWTs

When a client has a token it will use it to authenticate the user. The token can be sent to your server in a cookie or an HTTP header, such as the Authorization: Bearer header.

For example, if it comes in as a cookie and you’re using the cookies library with your Express app, you could pull the token from the cookie like this:

var token = new Cookies(req,res).get('access_token');
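If instead the token arrives in an Authorization: Bearer header (say, from a single-page app or an API client), a small sketch of pulling it out might look like this:

// strip the "Bearer " prefix from the Authorization header, if one was sent
var authHeader = req.headers['authorization'] || '';
var token = authHeader.replace(/^Bearer /, '');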

Regardless of how the token comes in, it will be that same compacted string that you sent to the client. To verify the string, you simply need to pass it to the verify method in the library, along with the secret key that was used to sign the token:

var verifiedJwt = nJwt.verify(token,secretKey);

If the token is valid, you can log it to the console and see the same information that you put into it!

{
  header: {
    typ: 'JWT',
    alg: 'HS256'
  },
  body: {
    jti: '3ee9364e-8aca-4e39-8ba2-74e654c7e083',
    iat: 1434695471,
    exp: 1434699071,
    sub: 'user9876',
    iss: 'https://mytrustyapp.com',
    permissions: 'upload-photos'
  }
}

If the token is invalid, the verify method will throw an error which describes the problem:

JwtParseError: Jwt is expired

If you don’t want to throw errors you can use the verify function asynchronously:

nJwt.verify(token,secretKey,function(err,token){
  if(err){
    // respond to request with error
  }else{
    // continue with the request
  }
});
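Putting the pieces together, a small Express middleware that authenticates each request from the cookie might look like the following sketch; the cookie name, the secretKey variable, and the nJwt and Cookies requires are carried over from the earlier examples, and everything else is illustrative:

// Illustrative middleware: pull the token from the cookie and verify it on every request
function requireJwt(req, res, next) {
  var token = new Cookies(req, res).get('access_token');
  nJwt.verify(token, secretKey, function (err, verifiedJwt) {
    if (err) {
      return res.status(401).send('Authentication required');
    }
    req.user = verifiedJwt.body.sub;   // expose the subject to downstream handlers
    next();
  });
}

app.get('/photos', requireJwt, function (req, res) {
  res.send('Hello, ' + req.user);
});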

JWTs Made Easy!

That’s it! Creating and verifying JWTs is incredibly simple, especially with the API that nJwt gives you. Now go forth and JWT all the services!

But remember: do it securely. While our nJwt library does all the security for the JWT, you also need to ensure that your application is using cookies securely. Please see my other article for an in-depth walkthrough of the security concerns:

Build Secure User Interfaces Using JSON Web Tokens (JWTs)

Happy verifying!

June 21, 2015

Nat SakimuraOn the 1password WebSocket non-authentication vulnerability [Technorati links]

June 21, 2015 04:02 PM

In "On the XARA vulnerabilities in MacOS X and iOS" [1], I wondered aloud why AgileBits, the maker of 1password, had said in the original article [2] that countermeasures would be difficult. Having now gone and read AgileBits' own explanation [3], I understand. Of course they said that. Also, the way the paper's authors present things, presumably to promote their own work, is a bit misleading.

The 1password vulnerability the paper's authors point out is that the communication from the 1password browser extension to 1password mini can be intercepted by malware. 1password mini is supposed to listen for WebSocket connections on port 6263, but if malware occupies that port before 1password mini does, it can steal the passwords and other data that the 1password browser extension sends. Put another way, it can only steal passwords that the user has just typed in and decided to newly save into 1password. Passwords already stored in 1password are not leaked.

In response to this, I wrote in "On the XARA vulnerabilities in MacOS X and iOS" [1]: "I would have the 1password app generate a key pair at install time, give the public key to the browser extension, and encrypt everything the extension sends to port 6263 with that public key." That certainly would work as far as it goes. But from AgileBits' point of view, it isn't good enough.

Why?

Because in a situation where such a rogue program can be planted on your machine, it is easier and more reliable to grab passwords as they are typed into the browser than to intercept the WebSocket traffic sent from the 1password browser extension to 1password mini. Fair enough. Scraping every typed password gives far broader coverage, and is more reliable, than taking over the port that 1password mini uses.

[1] http://www.sakimura.org/2015/06/3100/4/

[2] https://sites.google.com/site/xaraflaws/

[3] https://blog.agilebits.com/2015/06/17/1password-inter-process-communication-discussion/

June 20, 2015

Anil JohnWhat Do Standards Have To Do With Impact? [Technorati links]

June 20, 2015 12:30 AM

A lot, if you ask Joni Brennan, and I do. Have a listen!

What Do Standards Have To Do With Impact?

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.


The opinions expressed here are my own and do not represent my employer’s view in any way.

June 19, 2015

Rakesh RadhakrishnanDichotomous Disintermediation & DIgital Disruption [Technorati links]

June 19, 2015 06:19 PM
By now many folks have read the short book "Digital Disruption" by James McQuivey. Analogies such as Weight Watchers versus "Lose It!" are pervasive now across all industries. Check out "Interpreta", a startup that is innovating on the fusion of several next-generation technologies, including nanotech (pan optic), bioinformatics (genome technology), big data, cloud, BYOS and more, to create a platform that can personalize medical diagnosis, discovery and delivery. There are ten thousand such startups all over the world innovating like crazy. Traditional enterprise IT (the big gorillas), in my experience, has a culture of dichotomous thinking, especially when we add a solid dose of "disintermediation" to the digital disruption. In both IT and IT security, all apps are moving to the clouds (Workday or SuccessFactors for HCM, for example), infrastructure is moving to the clouds (AWS or Azure, for example), security is moving to the clouds as SecaaS (even vulnerability management and pen testing services are cloud based now), the BYOD and BYOS model is rapidly being adopted by business units (disintermediating legacy IT; see the blog on Shadow IT), and enterprises are leveraging a hybrid model (5% private cloud investment and 95% public clouds). This movement is about more than economies of scale; it is about knowledge at scale as well (a vendor managing network DDoS protection for 1,000 enterprises around the globe can do a better job than an in-house team of 5 network security architects deploying on-premises anti-DDoS), with the flexibility to switch vendors based on performance. This disintermediation provokes major dichotomous thinking in IT, as project ownership, people's jobs, and outdated internal processes mapped to tools all get challenged.

In this new world of digital disruption, especially in enterprise IT security, what security architects need to do is embrace these models and leverage their plus points after extensive due diligence, ensuring that the integrated set of SecaaS vendors and the private cloud model (which allows for secure cloud connect) enables cloud adoption (via a cloud CGEIT model) with cloud control and compliance. "Cloud Connect, Cloud Control and Cloud Compliance": all three need to be well thought out.

When you have use cases where one vendor provides the PaaS (with whom you have done secure SDLC for a custom SaaS implementation), you run it on a different vendor's Compute as a Service (like AWS), and you leverage yet another Storage as a Service vendor (like Cleversafe) for securely storing what has been computed, you essentially have to deal with app sec, net sec, data sec and all the layers of security in all of these vendors' environments, because each service (even IaaS, DBaaS, STaaS and more) still involves a data center, APIs, data and more. You have just broadened the attack surface exponentially!

These types of use cases and the disintermediation model make cloud security brokers (or cloud access control services) very powerful, as they do the discovery, analysis (using big data technologies, including threat analytics), risk ratings and rankings, trust and reputation verification, risk-based access policies, data protection (DLP and encryption) and compliance reporting end to end.

AWS might have PCI-DSS certifications and SuccessFactors might have PCI-DSS certification; however, an enterprise's specific implementation and operationalized instance also requires further testing and validation to ensure that PCI-DSS certification for the app (fully leveraging what the vendors have to offer) is available for that specific implementation, and cloud security brokers can play a huge role in such compliance scenarios: HIPAA/HITECH, PCI, SOX, FFIEC, FIPS, FDA and more.
June 18, 2015

Nat SakimuraOn the XARA vulnerabilities in MacOS X and iOS [Technorati links]

June 18, 2015 03:49 PM

This afternoon (June 18), GigaZine ran a sensational article titled "Vulnerability discovered in iOS and OS X that lets passwords saved in iCloud, Mail and browsers be stolen; Apple has ignored it for over half a year" [1]. Well, that is web media for you, but the article alone leaves you with no idea what is actually going on, so I went and read the original paper.

That paper is this one.

First of all, let's give the authors a round of applause.

With that said:

The zero-day attacks the authors claim to have discovered for the first time fall into the following four categories, or five if you break them down more finely.

  1. Password Stealing (a Keychain access-control vulnerability) [MacOS X]
  2. Container Cracking (a mix-up in the Apple App Store's BundleID verification) [MacOS X]
  3. IPC Interception (3.a WebSocket non-authentication, and 3.b local OAuth redirect) [MacOS X]
  4. Scheme Hijacking [MacOS X, iOS]

Of these, at least 3.b and 4 are problems we have in fact known about since at least November 2013, and they are exactly what OAuth PKCE [3], now in the final stages of standardization, is designed to solve. Also, while the article says "there is no countermeasure", what that really means is that there is nothing an end user can do right away. As a developer, there are ways to keep your own app from being vulnerable, and I will introduce those below as well.
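As a rough illustration of the PKCE idea (my own sketch of the code_verifier / code_challenge pair from the draft, not code from this post): the client generates a random verifier, sends only its SHA-256 hash with the authorization request, and later presents the verifier itself when exchanging the code, so malware that merely intercepts the redirect cannot redeem the authorization code.

var crypto = require('crypto');

// PKCE uses base64url encoding without padding
function base64url(buf) {
  return buf.toString('base64').replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}

var codeVerifier  = base64url(crypto.randomBytes(32));    // kept secret inside the legitimate app
var codeChallenge = base64url(crypto.createHash('sha256').update(codeVerifier).digest());

// The authorization request carries only codeChallenge; the token request later
// includes codeVerifier, which the server hashes and compares before issuing tokens.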

[1] http://gigazine.net/news/20150618-ios-os-x-password-killer/

[2] https://sites.google.com/site/xaraflaws/

[3] Sakimura, N., Bradley, J., and N. Agarwal: “Proof Key for Code Exchange by OAuth Public Clients”, IETF, (2015)

1. Password Stealing (a Keychain access-control vulnerability)

CourionAssessing the Risk of Identity and Access, Part 2 [Technorati links]

June 18, 2015 12:30 PM

Access Risk Management Blog | Courion

Venkat Rajaji VP of Product Management & Marketing

In part one of this blog, we shared reasons why your security team may not be able to sleep at night: risks to your information technology infrastructure that may be caused by risk from identities and their access. We discussed the most common access risks—from the routine to those caused by changes in the business—and provided some reasons why you may want to look inside, and not just invest in perimeter security. If you haven’t yet read part one, you can do so here.

So now that we know what the risks are, let’s discuss ways to mitigate these access risks and gain visibility into your organization.

Identity and Access Management Controls

When we look at provisioning identities or certifying access for governance, it quickly becomes a rubber-stamping process. You want to make sure the right people have the right access but what if you don’t know what that person needs for his or her job? Do you reject or approve? Other than a slowdown in productivity, there is no bad outcome if you don’t approve access, but instead request additional sign-offs. After all, with hundreds of thousands of people and identities, access rights and roles, policies and regulations, actions, and resources, you have trillions of access relationships to manage.

In a survey conducted by Courion about the access risks that cause the most anxiety, number one on the list—at 46 percent—was privileged account access; that is, accounts such as those used by administrators that have increased levels of permission and elevated access to critical networks, systems, applications, or transactions. Other anxiety-causing access issues that accounted for 31 percent were unnecessary entitlements and abandoned or orphaned accounts. What this tells us is that over half of the anxiety in your organization is based on provisioning.

To effectively address this issue, we need to start looking at not just passing our audit at the end of the year but also at the true impact of risk created through increased or inaccurate access credentialing on an ongoing basis.

But what if with each request you received you also knew the perceived risk of approving or rejecting it? What if you could take a look at all of your credentials across your system and see who was the greatest risk? That’s where an intelligent or risk-aware identity and access management tool comes in.

With risk-aware IAM you have the ability to automate your provisioning process to keep your backlog at a minimum and still ensure that you are provisioning the correct access to your employees without just rubber-stamping an approval. With intelligence driving your provisioning and governance you can see risks long before you have an issue. Imagine if you were able to log in and see access credentials listed like this:

Risk Aware IAM Table

We need to understand these access risks on a scale from low risk to high. Provisioning today includes a request, a policy evaluation, and a quick approval or rejection of the request. At Courion, we see things differently. If the request is seen as a low risk item, then it gets passed through and fulfilled in our automated system.

Provisioning Tool

But for other access requests which may represent some risk, the access request will require an approval or both an approval and a micro certification.

This micro-certification, or risk-based certification review, provides holistic context around the information being examined, thus allowing an IS manager to make an informed decision on whether a user’s access is suitable or not before granting access. By performing these narrowly focused micro-certifications, organizations can reduce access risk in a smarter, more efficient way on the front end of the request to guard against over- or under-privileged accounts.

 Provisioning System

Intelligent IAM is the next-level evolution of traditional IAM. Each process is guided by intelligence, with front-end approvals and risk assessments that allow near real-time decisions to manage and mitigate risk to the company. According to Gartner, “By year-end 2020, identity analytics and intelligence tools will deliver direct business value in 60 percent of enterprises, up from less than 5 percent today.”

Through continuous monitoring and analytics applied to your provisioning and governance activities in real time, you are able to see the most up-to-date information thus allowing your company to truly make data-driven decisions. With intelligence driving policy, provisioning, and access decisions, you can mitigate risk in real time and have better visibility into your organization.

Are you looking for more visibility into your company’s identity and access risk? With a Quick Scan assessment of your organization’s access risk we can help you take a quick look into your security measures and provide you with a plan of what you can do to mitigate those risks. If you would like more information on what a Quick Scan can do for your company, contact us today at 1-866-COURION or at info@courion.com.  

blog.courion.com

Julian BondIt's 200 years since Sean Bean saved the day at Waterloo. He was younger then. [Technorati links]

June 18, 2015 07:25 AM
It's 200 years since Sean Bean saved the day at Waterloo. He was younger then.

http://www.imdb.com/title/tt0120111/
 Sharpe's Waterloo (TV Movie 1997) »
Directed by Tom Clegg. With Sean Bean, Daragh O'Malley, Abigail Cruttenden, Alexis Denisof. Based on the novel by Bernard Cornwell, "Sharpe's Waterloo" brings maverick Britis...

[from: Google+ Posts]
June 17, 2015

Nishant Kaushik - OracleThe Real Lessons from the LastPass Breach [Technorati links]

June 17, 2015 09:20 PM

Didn’t think I’d be writing back-to-back posts regarding breaches, but that’s the world we live in now. And the LastPass breach is interesting on many levels.

In warning users of the breach, LastPass disclosed that their investigation into the breach showed “that LastPass account email addresses, password reminders, server per user salts, and authentication hashes were compromised”. This news has obviously given the people that were against cloud-based password managers (like LastPass) the ammunition they needed to say “see, we told you this was a bad idea”. The less thoughtful simply call foul without offering any suitable alternative. The more thoughtful ones go after the cloud aspect of this and suggest using desktop-based alternatives like KeePass. KeePass is a good alternative, but when you put it into the context of your usage, your devices, and your work patterns, it has quite a few usability limitations. That forces users to either do additional work to make it usable or (more likely) work around those limitations in ways that negate the security benefits.

Are Cloud-Based Password Managers Still Effective?

Yes, but there’s a big caveat.

By all accounts, the architecture LastPass built worked exactly as intended. In a previous post I described in detail how LastPass and most of the big password managers go about protecting your application passwords. Because they don’t store your actual master password and derived encryption key on the server, the hackers didn’t get those, just the hash of the “generated password” (as I described it in the previous post). Because part of the data the hackers got includes the salt used in hashing, they could use a brute force attack to figure out the actual master password from the data they took away. This post by Robert Graham does a good job describing how the effort to crack a well-formed password is so high as to make any automated cracking of the entire set of authentication hashes nearly useless.
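To make the brute-force cost concrete, here is a small sketch of the kind of salted, iterated derivation involved; the function name, iteration count, and parameters are illustrative assumptions on my part, not LastPass's actual configuration:

var crypto = require('crypto');

// One password guess costs one full derivation over the per-user salt.
// The 100,000 iterations here are purely illustrative.
function deriveAuthHash(masterPasswordGuess, perUserSalt) {
  return crypto.pbkdf2Sync(masterPasswordGuess, perUserSalt, 100000, 32, 'sha256').toString('hex');
}

// An attacker must run this for every candidate password and compare the result
// against the stolen hash, which is why long, random master passwords hold up.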

What this means is that the real threat is a targeted effort. Because the data set includes account email addresses, the hackers could search through to find email addresses of known high-value targets (like ‘kevin.feige@marvelpictures.com'; note: not a real email address), and then try to crack just that individual master password. Depending on the complexity of the master password, this goes anywhere from easy to near impossible (as described in Robert’s post). And this is where some key points come in:

And for gosh sake, don't reuse your passwords!

The Big Caveat

The changing of your LastPass master password will stop the hacker from getting into your account and retrieving your application passwords. But since the master password and the other compromised data is the basis for your individual encryption key, it will be a very different problem if the hackers have also gotten the encrypted password vaults. In their security notice, LastPass says that “we have found no evidence that encrypted user vault data was taken”. We all know that absence of evidence is not evidence of absence. If the hackers actually do have the encrypted passwords vaults, then in a targeted fashion, they can take all the time they want to crack an individual master password, generate the encryption key and then decrypt that individuals vault. This is where you’re safe if your master password was really and truly strong, and not so much if it wasn’t.

And why I like that password managers are adding functionality to automatically rotate your account passwords, making a compromised password vault that much less valuable.

Password Reminders. Huh, Yeah. What Are They Good For?

No, the answer is not “absolutely nothing”. They are good for helping hackers compromise your account.

Let’s face it, if a password reminder can help you remember what the password is, then it can also help a determined hacker figure out what the password is. Reminders like “Yankees First Baseman”, “Honeymoon Location”, “Sons DoB”, “Taxi Driver Quote” and “Who the Hell is Bucky?” basically give the hacker all the information they need to just guess what the password is. And if it isn’t that straightforward (because you were smart enough to make it “Th3W1nt3rS0ld13r” instead of “The Winter Soldier”), it still reduces down to an extremely manageable set the universe of possible passwords they need to run through using a cracker like oclHashcat.

So the fact that the LastPass security notice says that password reminders were compromised is bad. That it doesn’t refer to them as “encrypted password reminders” seems really bad (because if they weren’t encrypted, then I call bad on the LastPass team for that). And it once again points out that no matter how much we teach end-users about password strength and hygiene, the vulnerabilities that exist in all of the supporting services surrounding passwords (just as I outlined in analyzing the Mat Honan attack) continue to mean that we can’t move fast enough into a post-password world.

The post The Real Lessons from the LastPass Breach appeared first on Talking Identity | Nishant Kaushik's Look at the World of Identity Management.

Kantara InitiativeAnnouncing the formation OTTO WG [Technorati links]

June 17, 2015 08:32 PM

Dear Kantara Community,

We are pleased to announce the formation of the OTTO WG!  OTTO stands for Open Trust Taxonomy for OAuth2. We hope that you will participate in the innovative new work group!

oauth2
The working group will develop the basic structures needed for the creation of multi-party federations between OAuth2 entities. The intent is to create a foundation of trust and drive down the cost of collaboration by publishing technical and legal information. These structures will include the set of APIs and related data structures enabling an OAuth entity to manage which entities it trusts and for other OAuth entities to discover members of the federation and details of the services.

The Work Group is necessary to bring together collaborators from existing SAML federations and the OAuth community to collaborate on a draft solution that meets their shared goals in this area and takes into account lessons learned from the past ten years of SAML.

Specifically, this Work Group is responsible for:

  • Developing a set of use cases and requirements that are specific enough to guide the specification design work
  • Developing a set of modular draft specifications meeting these use cases and requirements
  • Overseeing the contribution of each resulting draft specification to a standards-setting organization

The APIs and data structures will enable discovery of the members of the federation and details about their services, key material and technical capabilities. The final scope will be refined after consideration of the use cases.

Existing SAML Federation XML structures will inform this work, but the data structures will not be expressed in XML but in JSON. The functions supported in existing SAML federations should be supported. Additionally, support for a more efficient and scalable discovery process and dynamic integration process will be considered.

Welcome OTTO WG to our community!

KatasoftOAuth is not Single Sign-On [Technorati links]

June 17, 2015 07:00 PM

We’ve been on a conference blitz over the last few months at Stormpath, and standing in the booth, we get asked a lot of questions about authentication and authorization: protocols, systems, services and security.

Two areas where the misinformation – and therefore misunderstanding – tends to hang out are OAuth and Single Sign-On, and where they intersect.

To start, OAuth is not the same thing as Single Sign-On (SSO). While they have some similarities, they are very different.

OAuth is an authorization protocol. SSO is a high-level term used to describe a scenario in which a user uses the same credentials to access multiple domains.

What the Heck is OAuth?

OAuth is an authorization protocol that allows a user to selectively decide which services can do what with a user’s data.

For instance, if you attempt to log into Spotify using Facebook, you’ll be redirected to Facebook’s website and will see something like the following:

Spotify Facebook Login

What’s happening is that you’ve authenticated with Facebook directly, and now you’re being asked to grant Spotify permission to access YOUR data. This is an authorization request (e.g., what can Spotify do, and what can they NOT do?).

OAuth’s primary purpose is to give users more control over their data, so you can selectively grant access to various bits of functionality for various applications that you may want to use.
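To ground this, an authorization request is usually just a redirect to the provider with your client ID and the scopes you are asking the user to approve. A rough sketch in Express (the endpoint, client ID, and scope names are illustrative placeholders, not a real Facebook or Spotify integration) might look like this:

// Illustrative only: send the user to the provider's authorization endpoint
app.get('/connect', function (req, res) {
  var authorizeUrl = 'https://provider.example.com/oauth/authorize' +
    '?response_type=code' +
    '&client_id=YOUR_CLIENT_ID' +
    '&redirect_uri=' + encodeURIComponent('https://yourapp.example.com/callback') +
    '&scope=' + encodeURIComponent('read-library playlist-modify');  // the scopes the user approves or denies
  res.redirect(authorizeUrl);
});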

NOTE: I covered this in depth a few weeks ago in an OAuth specific article I’d highly recommend reading if you aren’t already an OAuth expert: What the Heck is OAuth?

What the Heck is Single Sign-On (SSO)?

Single Sign-On, on the other hand, is not a protocol; it’s more of a high-level concept used by a wide range of service providers (sometimes with confusing differences).

SSO is an authentication / authorization flow through which a user can log into multiple services using the same credentials.

For instance, at your company, you might want to use one set of credentials to access every service your employees rely on day to day (email, HR, the wiki, issue tracking, and so on).

Instead of making each employee at your company create different accounts for each of those services they use all the time, you can instead create a single account for each employee that grants them access to all of your company services.

This is SSO.

One of the main benefits to using SSO is that your users have only a single account and password to remember which gets them into all of their services. This typically makes account management / user data storage simpler for employees, as there’s less duplicate data floating around between systems.

If you’re working on projects at a large company, SSO can be a really nice way to manage your users. You have a lot more control over user accounts and user data: you retain this information and interface with providers using the Security Assertion Markup Language (SAML).

Essentially what happens is this: when a user tries to access one of your services, the service redirects them to your identity provider; the identity provider authenticates the user and returns a signed SAML assertion; the service validates that assertion and logs the user in, without ever handling the user’s password itself.

If you’d like a more in-depth introduction to SSO and SAML, I’d highly recommend reading the Salesforce Single Sign On Guide. It does a great job of explaining what all the benefits of traditional SSO are, and how to implement things properly.

Should I Use OAuth or SSO?

At the end of the day, there are really two separate use cases for OAuth and SSO.

If you want your users to be able to use a single account / credential to log into many services directly, use SSO.

If you want your users to have accounts on many different services, and selectively grant access to various services, use OAuth.

And… If you want to support either OAuth or SSO, go create a Stormpath account! We make it really easy to do both!

Nat SakimuraThree Goldberg Variations: Glenn Gould, Maria Tipo (a grand-pupil of Busoni), and the muscle pianist Tzimon Barto (Busoni edition), with a look at Busoni's own playing [Technorati links]

June 17, 2015 06:15 PM

Today, after a late-night conference call ended, I opened the Naxos Music Library and found, featured as the pick of the week, the Goldberg Variations (F. Busoni edition) played by Tzimon Barto, the so-called muscle pianist. Apparently he is a pianist Naxos is pushing hard.

The formal title of what is commonly called J.S. Bach's "Goldberg Variations" is "Aria with Diverse Variations for Harpsichord with Two Manuals" (Clavier Ubung bestehend in einer ARIA mit verschiedenen Veraenderungen vors Clavicimbal mit 2 Manualen) (BWV 988), and it forms the fourth volume of the four-volume Clavier-Übung. Published in 1742, the work was long forgotten once the harpsichord era gave way to the piano; Landowska's performances on a modern harpsichord played their part, but it is fair to say that the piece became widely known above all through the huge success of Glenn Gould's debut recording.

Partly for that reason, for a long time the Goldberg Variations meant, to me, Gould's old and new recordings. The shock of hearing the new recording, made the previous year, right after Gould died in October 1982 was especially strong. I still remember Mark Obama Ndesandjo [1] arriving at our house in Nairobi with the LP, saying "just listen to this", putting it on, and the two of us being electrified. An hour passed in an instant.


Goldberg Variations (CD)


   Compared with the 1955 Goldberg Variations, the stunning debut with its clear rhythm, sharp, riveting approach and contrapuntal playing, this 1981 re-recording is astonishingly different. The 1981 version is paced more slowly and expressed more simply, with deep deliberation evident in the ornamentation. The tempos are also superbly constructed (though to some listeners they may sound a little exaggerated). In 1955 there were no repeats at all, whereas here the A sections are repeated in the canons, the fughetta, and other fugal variations. The fingerwork, crossing hands swiftly while touching the keys precisely, is still there and can only be admired. Yet the slower tempos seem to express the dance elements of the piece better. (Jed Distler, Amazon.com)

That was not the only reason I revered Gould's recordings. Mrs. Davis, one of the few direct pupils of Wilhelm Kempff and at the time the teacher of both Mark and my younger sister, also praised them highly, so there was no way a high-school student like me would escape the influence.

That view was overturned only recently, after I heard Maria Tipo. Called the female Horowitz of Milan and admired by none other than Argerich, Tipo plays the piece as if, where Gould recreated harpsichord playing on the piano, she were recreating a chorus with orchestra. You simply hear a church choir: tenor, bass, alto, and then, layered over them, the soprano. "Ah, this must be what Bach wanted to do on the harpsichord," the performance makes you think.


Bach: Goldberg Variations & Italian Concerto etc (MP3 download)



Maria Tipo is not well known in Japan, and perhaps not even in Europe, but I think she is a tremendous pianist. You can tell from the fact that Argerich said in a 2000 interview, "Maria Tipo. She is sensational. (It is a pity that) she no longer plays the piano." [2]

Maria Tipo is an Italian pianist whose mother was a direct pupil of Busoni; she herself studied with Alfredo Casella. Born in 1931, she won the Geneva International Competition at seventeen. The twelve Scarlatti sonatas she recorded in just four hours during her 1955 American tour were hailed by Newsweek as the finest record of the year. "The female Horowitz of Naples" indeed. [3]


Scarlatti Sonatas (MP3 download)



And so, coming full circle, we arrive at Tzimon Barto's Goldberg Variations. He uses the Busoni edition of the score, that is, the edition prepared by the great Italian pianist and composer in whose direct lineage Maria Tipo stands.


Bach: Goldberg Variations, BWV 988 (MP3 download)



From the very opening he is even slower than Gould, with a "Romantic" ebb and flow of rhythm that Gould never shows... I am not sure what to make of it. Perhaps it works once you get used to it. Incidentally, if you listen to Busoni's own Bach playing, he does not sway the tempo nearly that much, and his Liszt is quite plain too. Players of the Romantic era generally played far more plainly, and at faster tempos, than we imagine today.

So let me close with a recording from among the few that Busoni left: the one that Gunnar Johansen, a grand-pupil of Busoni who knew his playing during Busoni's lifetime, calls the only piano-roll recording that truly conveys Busoni.

Liszt's "Feux follets", played by F. Busoni. Enjoy. He really is good: no showmanship at all, very plain, and yet dynamic.

[Footnotes]

[1] At the time I had no idea that his older brother would become the first non-white President of the United States... (laughs)

[2] Italy's Radio 3, "An Interview with Martha Argerich" (2000/2/16), http://www.andrys.com/argitaly.html. She was interviewed together with Abbado, who studied under the same teacher.

[3] The hard-edged tone, the affinity for Scarlatti, and a technique said to surpass even Argerich's presumably all call Horowitz to mind.

Julian BondToday's surprising mini-beast in the garden. A Rose Chafer beetle. Approx 20mm. Ware, Hertfordshire,... [Technorati links]

June 17, 2015 02:22 PM
Today's surprising mini-beast in the garden. A Rose Chafer beetle. Approx 20mm. Ware, Hertfordshire, UK.

http://www.uksafari.com/rosechafers.htm

[from: Google+ Posts]

Julian BondIt may look like big picture planning but actually it's just emergent behaviour from the Hive Mind. [Technorati links]

June 17, 2015 01:12 PM
It may look like big picture planning but actually it's just emergent behaviour from the Hive Mind.

So which Ant Hive in the Hive Super-Collective gave the nuclear launch controls to a Google autonomous car? It's probably OK because the only accidents the robot cars ever have are when they're rear-ended by cars driven by humans. 

http://xkcd.com/1539/
http://www.engadget.com/2015/06/05/google-self-driving-car-report/
 xkcd: Planning »
Warning: this comic occasionally contains strong language (which may be unsuitable for children), unusual humor (which may be unsuitable for adults), and advanced mathematics (which may be unsuitable for liberal-arts majors). BTC 1FhCLQK2ZXtCUQDtG98p6fVH7S6mxAsEey ...

[from: Google+ Posts]

Nat SakimuraPrivacy-related news that caught my eye last week (2015/6/10-17) [Technorati links]

June 17, 2015 03:10 AM

These days I have a mountain of things I have been meaning to write about and have not managed to write any of it; how is everyone doing?

Since I don't have time to write proper blog posts, here is just the list of news items I thought I might write about.

French authority orders Google to apply the "right to be forgotten" worldwide

[Paris, 12th, Reuters] – CNIL, the independent French authority for data protection, has ordered Google (GOOGL.O) that, when asked to delete outdated personal information, it must remove it not only from European search results but from search results worldwide. If Google does not comply within 15 days, CNIL says it will move to sanctions. (Source) Reuters

France is really pressing hard. One wonders what is driving them to go that far...

"You have already moved out, you know": who moved my resident registration, and why?

(Jun Takahashi, June 15, 2015, 11:28)
Imagine discovering one day that your resident registration had suddenly been transferred to a place you have never been. That bizarre thing happened to a man living in Fuji City, Shizuoka Prefecture. Who did it, and for what purpose? We followed the mystery. (Source) Asahi Shimbun Digital

If identity proofing were done at Level 2 or above of a process like ISO/IEC 29115, this sort of thing basically should not happen. In the end, it is proof that "how the verification was done" matters far more than "who did the verification". The Basic Resident Register is the fundamental database used when issuing My Number cards, so its operation needs to be run much more rigorously. Incidentally, if you seriously want to issue high-assurance credentials, you have to change the approach fundamentally: start by redoing identity proofing for, say, public servants, use that as the trust anchor, and expand gradually from there.

Information Disclosure and Personal Information Protection Review Committee

The Information Disclosure and Personal Information Protection Review Committee is an advisory body to be established within the Supreme Court as of July 1, 2015.
The committee consists of three outside experts and, in response to referrals from the Supreme Court, investigates and deliberates on decisions by courts nationwide to disclose or withhold judicial administrative documents or personal information they hold, and then issues reports. The courts will respond with due respect to those reports.
The contents of the reports are to be published in due course. (Source) Supreme Court of Japan

This one concerns freedom of information.

The EU Council agrees on the EU Data Protection Regulation

(2015/6/15)

Today, Justice Ministers in the Council reached a General Approach on the new data protection rules confirming the approach taken in the Commission’s proposal back in 2012. Trilogue negotiations between the Council, the European Parliament and the EU Commission will start next week on 24 June. (出所)Privacy Laws & Business

The EU Council has agreed on its approach to the new data protection law [1], and it reportedly follows much of the Commission's 2012 proposal (for example: EU Directive → EU Regulation, application to companies outside the EU that offer services in the EU market, a (limited) right to be forgotten, and data portability).

Trilogue negotiations between the EU Council, the Parliament, and the Commission are said to begin on June 24.

 

 

June 16, 2015

Rakesh RadhakrishnanShine a Light on Shadow IT - SKYHIGH [Technorati links]

June 16, 2015 10:27 PM
I attended an amazing presentation at ISACA LA in May 2015, where the speaker (Glenn Wilson) highlighted the problems associated with Shadow IT (cloud models being embraced by business units, blindsiding IT) and how it is a real problem for IT (CIOs and CTOs). From a CSO perspective, you really cannot protect anything if you do not know what you need to protect. This IDC write-up also describes what drives Shadow IT in an enterprise and how it is truly a problem for both the CIO and the CFO. In reality it is a headache for the CSO, CPO and CCO (the security, privacy and compliance officers in major enterprises). I have seen first hand that this is a pervasive problem in large enterprises, and so far I had believed that only solutions oriented toward organizational behavior modification existed, in banking, pharma, retail and other industries (via standards and policies for cloud adoption).


SkyHigh Networks, a Silicon Valley startup headed by an industry veteran (Rajiv Gupta), has done an amazing job addressing this core and pervasive problem in all enterprises, going beyond a typical Cloud Data Security vendor. 

From an enterprise security architecture perspective I also tend to review any security vendor's integration capabilities (similar to FireEye's integrations, for example). SkyHigh Networks has done an amazing job integrating their Cloud DLP/Cloud Security system with:
a) Bluecoat Web Proxy
b) Palo Alto NGFW and Panaroma
c) Safenet
d) Ping
e) Okta
f) MS RMS
i) and more cloud offerings..

This is an amazing opportunity for every enterprise IT organization to address this critical, core and systemic problem (shadow IT) with a POC with SkyHigh Networks, and at the same time address compliance in the clouds (HIPAA, PCI and more).
 

MythicsTry SPARC for Free! [Technorati links]

June 16, 2015 02:35 PM

Mythics, through our partnership with Oracle-Fujitsu, is offering its M10 servers through a loaner program that lets organizations bring in the systems for free to…

CourionCyber Security Tech Tuesday Roundup [Technorati links]

June 16, 2015 12:30 PM

Access Risk Management Blog | Courion

cyber security

Think you caught everything this week in the world of cybersecurity? Here is a list of the top articles that grabbed our attention.

 

Security Metrics - Don't be thrown off by the haircut analogy; this blog is a great look at how we translate our efforts into a meaningful context. Security and IT departments are missing a way to communicate their value in terms that non-security professionals can understand and evaluate and Joshua does a great job of bringing this to light.

Joshua Goldfarb, DarkReading.com,

 

'Your PC May Be Infected!' Inside the shady world of antivirus telemarketing - We all spend money securing the perimeter—holding up the firewall—but do we spend enough time training all of our employees on the possibility of PC security scams? This $4.9 billion industry is built around calling, emailing, or sending pop-up messages to your employees warning them about a breach and offering to help.

Jeremy Kirk, CSOOnline.com

 

Why the Firewall is Increasingly Irrelevant - Funny how we discussed last week at Ping’s Cloud Identity Summit that up to 85% of security budgets are being spent on protecting your perimeter and that your biggest threats are from inside the organization. Asaf Cidon has a different take on the same concept: protecting the perimeter is futile.

Asaf Cidon, DarkReading.com

 

The Rise of Cyber Extortion - We all remember the Sony hack and the introduction of the first widespread use of cyber extortion. It looks like the holding hostage of the Sony data was just the beginning in the rise of this new cyberattack. From denial-of-service attacks to ransomware, this is a great article updating us all on the rise of cyber extortion.

Danielle Au, Security Week 

blog.courion.com

Kaliya Hamlin - Identity WomanWe “won” the NymWars? did we? [Technorati links]

June 16, 2015 04:34 AM

Short answer: No. I’m headed to the protest today at Facebook.

A post about the experience will be up here by tomorrow. I’ll be tweeting from my account there which is of course @identitywoman

 

______

Post from Sept 2014

Mid-July,  friend called me up out of the blue and said “we won!”

“We won what” I asked.

“Google just officially changed its policy on Real Names”

He said I had to write a post about it. I agreed but also felt disheartened.
We won, but we didn’t really; it took 3 years before they changed.

They also created a climate online where it was OK and legitimate for service providers to insist on real names.

For those of you not tracking the story – I, along with many thousands of people, had our Google+ accounts suspended – this post is an annotated version of all of those.

This was the Google Announcement:

When we launched Google+ over three years ago, we had a lot of restrictions on what name you could use on your profile. This helped create a community made up of real people, but it also excluded a number of people who wanted to be part of it without using their real names.

Over the years, as Google+ grew and its community became established, we steadily opened up this policy, from allowing +Page owners to use any name of their choosing to letting YouTube users bring their usernames into Google+. Today, we are taking the last step: there are no more restrictions on what name you can use.

We know you’ve been calling for this change for a while. We know that our names policy has been unclear, and this has led to some unnecessarily difficult experiences for some of our users. For this we apologize, and we hope that today’s change is a step toward making Google+ the welcoming and inclusive place that we want it to be. Thank you for expressing your opinions so passionately, and thanks for continuing to make Google+ the thoughtful community that it is.

There was lots of coverage.

Google kills real names from ITWire.

Google Raises White Flag on Real Names Policy in the Register.

3 Years Later Google Drops its Dumb Real Name Rule and Apologizes in TechCrunch.

Change Framed as No Longer Having Limitations Google Offers Thanks for Feedback in Electronista

Google Stops Forcing All Users to Use Their Real Names in Ars Technica

The most important was how Skud wrote a “real” apology that she thought Google should have given:

When we launched Google+ over three years ago, we had a lot of restrictions on what name you could use on your profile. This helped create a community made up of people who matched our expectations about what a “real” person was, but excluded many other real people, with real identities and real names that we didn’t understand.

We apologise unreservedly to those people, who through our actions were marginalised, denied access to services, and whose identities we treated as lesser. We especially apologise to those who were already marginalised, discriminated against, or unsafe, such as queer youth or victims of domestic violence, whose already difficult situations were worsened through our actions. We also apologise specifically to those whose accounts were banned, not only for refusing them access to our services, but for the poor treatment they received from our staff when they sought support.

Everyone is entitled to their own identity, to use the name that they are given or choose to use, without being told that their name is unacceptable. Everyone is entitled to safety online. Everyone is entitled to be themselves, without fear, and without having to contort themselves to meet arbitrary standards.

As of today, all name restrictions on Google+ have been lifted, and you may use your own name, whatever it is, or a chosen nickname or pseudonym to identify yourself on our service. We believe that this is the only just and right thing to do, and that it can only strengthen our community.

As a company, and as individuals within Google, we have done a lot of hard thinking and had a lot of difficult discussions. We realise that we are still learning, and while we appreciate feedback and suggestions in this regard, we have also undertaken to educate ourselves. We are partnering with LGBTQ groups, sexual abuse survivor groups, immigrant groups, and others to provide workshops to our staff to help them better understand the needs of all our users.

We also wish to let you know that we have ensured that no copies of identification documents (such as drivers’ licenses and passports), which were required of users whose names we did not approve, have been kept on our servers. The deletion of these materials has been done in accordance with the highest standards.

If you have any questions about these changes, you may contact our support/PR team at the following address (you do not require a Google account to do so). If you are unhappy, further support can be found through our Google User Ombuds, who advocates on behalf of our users and can assist in resolving any problems.

BotGirl chimed in with her usual clear articulate videos about the core issues.

 

 

And this talk by Alessandro Acquisti surfaced about why privacy matters.

 

Google has learned something from this but it seems like other big tech companies have not.

 

Rakesh RadhakrishnanDB Firewalls for DB AAS and Big Data in the Clouds [Technorati links]

June 16, 2015 12:35 AM

AWS offers several types of Database as a Service, including: a) AWS RDS, b) AWS DynamoDB (NoSQL) and c) AWS big data services. When such data repository technologies are offered as a service, what is also critical is a DB Firewall as a Service that wraps these DBaaS models for security, compliance, privacy, privileged access and more. The DB Firewall as a Service must also support policy expressions in XACML so an enterprise can author policies once and propagate the same policies, as a virtual appliance, into a private cloud implementation (for example, Oracle 12c, perhaps on an ESX image) and into a public cloud (including a VPC and an AMI image). It's great to see market-leading DB Firewall products now support AWS cloud models as well; see part 1 and part 2 of IBM Guardium's capabilities to do so (including RDS and S3 support). Guardium can support XACML for access policies, extraction policies and exception policies for access control at the DB layer. It can also support big data and NoSQL repositories (though I am not sure about AWS's own NoSQL and big data offerings!). These were requirements and use cases I have had since 2010/2011 (which were mostly gaps), so it is great to see some movement in this space. As enterprises rapidly move toward a cloud model with one consolidated datacenter (a private cloud with secure cloud connect technologies) and several public clouds, DB firewall appliances play a critical role when designed and deployed in a FIPS, forensics-certified manner, protecting data in both environments consistently, cohesively and comprehensively (for global deployments too). These appliances, with their defined set of resource profiles (DB object naming), in essence complement the DAL (data abstraction layer) in secure SaaS by ensuring that data can be accessed and manipulated (read, write, update or any other operation) only according to compliance policies, end to end.
June 12, 2015

Nishant Kaushik - OracleQuick Thoughts regarding the Kaspersky Labs Intrusion [Technorati links]

June 12, 2015 04:30 PM

Kaspersky Labs has revealed this week that their corporate network was subject to a sophisticated cyber-intrusion that leveraged a new malware platform. Their investigation is ongoing, and they have found the malware to have been used against other victims as well. So while I am sure there are more details that they will reveal, I did have some instant reactions that I couldn’t fit into a tweet, so decided to gather them here:

Oh, and one more thing:

SwiftOnSecurity on KasperskyLabsIntrusion

 

The post Quick Thoughts regarding the Kaspersky Labs Intrusion appeared first on Talking Identity | Nishant Kaushik's Look at the World of Identity Management.

Julian Bond30 years this month since the Battle of the Beanfield. 1 June 1985. [Technorati links]

June 12, 2015 03:48 PM
30 years this month since the Battle of the Beanfield. 1 June 1985.

http://www.theguardian.com/uk/2005/jun/12/ukcrime.tonythompson
https://en.wikipedia.org/wiki/Battle_of_the_Beanfield
 Twenty years after, mystery still clouds Battle of the Beanfield »
This month marks the 20th anniversary of what has become known as the Battle of the Beanfield. 537 Travellers were arrested - the most arrests to take place on any single day since the Second World War.

[from: Google+ Posts]
June 11, 2015

Nat SakimuraInternational cybersecurity conference to be held in Okinawa on November 7-8 [Technorati links]

June 11, 2015 08:15 AM

According to a report by TBS News i [1], the government will hold an international conference on cybersecurity in Okinawa Prefecture this November.

The conference is scheduled for November 7 and 8 at a resort hotel in Nago, Okinawa, and invitations are currently being extended, through the World Economic Forum (the organizer of the Davos meetings), to business leaders, companies and legal experts around the world. From the Japanese side, Yamaguchi, the minister in charge of IT policy, is scheduled to attend, and Prime Minister Abe's attendance is also reportedly under consideration.

As it happens, the week before, the IETF (Internet Engineering Task Force) holds its Yokohama meeting. The IETF is the organization that develops the technologies used on the Internet; it is no exaggeration to say that the Internet is built by the IETF. It holds plenary meetings three times a year, rotating around the world, where the latest standards work is done. If the Okinawa conference is open to outside participation, some of the Internet technology heavyweights visiting Japan for the Yokohama meeting could conceivably attend.

[1] TBS News i, 'International cybersecurity conference to be held in Okinawa' (2015/6/11)

 

June 10, 2015

Julian BondInstead of repurposing shipping containers, repurpose a whole tanker into a popup village. It's a rather... [Technorati links]

June 10, 2015 07:34 PM
Instead of repurposing shipping containers, repurpose a whole tanker into a popup village. It's a rather fanciful answer to the question of what happens to old oil tankers when global geopolitics and peak oil means they're no longer needed. 

http://www.nextnature.net/2015/06/from-discarded-mega-oil-tanker-to-village/comment-page-1

They'll need to allow for sea level rise when burning the old contents of the ships results in global warming.

I still have a Loompanics book on the shelves called "Free Space" exploring strategies for living space outside traditional nation state jurisdictions. One possibility was taking old tankers, loading them with soil and then scuttling them on one of the Pacific Atolls that lies just beneath the waves to make an instant island. The idea above feels remarkably like this. Then there's the abandoned aircraft carrier arcology from Snowcrash. 

The big problems with all these ideas are the cost of moving the abandoned ship and the scrap value of the metal. Getting and maintaining control is not going to be cheap.
 
From Discarded Mega Oil-Tanker to Village « NextNature.net  »


[from: Google+ Posts]

Neil Wilson - UnboundIDUnboundID LDAP SDK for Java 3.0.0 [Technorati links]

June 10, 2015 07:18 PM

We have just released the 3.0.0 version of the UnboundID LDAP SDK for Java. It is available for download via the LDAP.com website or from GitHub. There are a few pretty significant announcements to accompany this release:

The following additional features, bug fixes, and enhancements are also included in this release:

Rakesh RadhakrishnanSecurity a "Stumbling Block or a Building Block" for Cloud Adoption [Technorati links]

June 10, 2015 06:51 PM
Security can be an impediment (a stumbling block) to cloud adoption by an enterprise or, when done right, an enabler (a building block) of it. This is so true. There are many data breach incidents caused by insecure designs of applications and data in the cloud (the "notorious nine" from the CSA); done right, the security architecture becomes an enabler of cloud adoption instead. The shared responsibility model proposed by AWS maps to what the CSA has stated in the past, especially when enterprise consumers are using cloud platforms for IaaS. AWS has done an amazing job in terms of ensuring data center security, physical security, personnel security, network security, secure DNS, disaster recovery and more, without which AWS could not have achieved the following:
a) ISO 27001 certification for all of its data centers globally (which implies that the people, processes and technology tools have been validated against all ISMS requirements).
b) Suitability for hosting HIPAA-compliant medical applications in the cloud (read the May 2015 paper from AWS).
c) PCI DSS Level 1 compliance (version 3) for AWS data centers, with a shared responsibility for any enterprise that needs to host a PCI-compliant application.
d) Published SOC 1, SOC 2 and SOC 3 reports based on the AICPA Trust Services principles.
e) In addition, a number of collateral documents for FedRAMP, FISMA, FIPS, FERPA and many more are posted here.

The majority of these certifications are based on the security technologies (together with the people and processes) that are in place at the data centers, including:
vulnerability reporting and penetration testing;
cloud security technical capabilities (such as MFA and AES);
and a number of security resources for developers, designers and deployment engineers
(including the AWS security best practices paper).

However, similar to what Shanket Naik from Coupa eloquently articulates (Coupa is a pay optimization SaaS and PaaS hosted in AWS) in this 20-minute video and in his blog post, it is quite clear that in the shared responsibility model a lot needs to be done from an application and data security standpoint to align with and augment the security capabilities offered by AWS.

These are what I described in my five dominant defensive design blog posts. An enterprise that is moving its apps to the cloud needs to ensure it has:

a) Secure SDLC processes
b) Application vulnerability management
c) Application pen testing (like WebInspect)
d) URL firewalls (like Blue Coat)
e) APT firewalls (like FireEye)
f) App firewalls (like Imperva)
g) XML firewalls (like Layer 7)
h) API firewalls (like Vordel)
i) FGEX (like Axiomatics)
j) DB firewalls (like Guardium)
k) DLP firewalls (like NextLabs)
l) Data tokenization (like Intel's)
m) Integrity check tools (like SignaCert)
n) SIEM integration (like Splunk)
o) Host IDS (like Dell)
p) Host IPS (like Dell)
q) SCIT (like SCIT Labs)
r) Self-cleansing containers (like Waratek)

and many more, to ensure that the application and data hosted in the AWS cloud can also be secure by design and compliant with HIPAA, PCI, and more (privacy baked in).

This is the AWS customer responsibility space, and enterprises that have a mature secure SDLC, wrapped with integrated security tools that protect the code and the data, along with rigorous change management and testing processes, can truly ensure that security is an enabler.
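As one small, hedged illustration of what checks on the customer side of the shared responsibility model can look like, the sketch below uses boto3 (the AWS SDK for Python) to flag S3 buckets with public ACL grants and RDS instances without storage encryption. It assumes AWS credentials are already configured in the environment, and it is only a starting point, not a substitute for the tool categories listed above.

```python
# Minimal spot-checks for the customer side of the shared responsibility
# model: flag S3 buckets with public ACL grants and RDS instances whose
# storage is not encrypted. Assumes AWS credentials are configured.
import boto3

PUBLIC_GROUPS = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_s3_buckets():
    s3 = boto3.client("s3")
    flagged = []
    for bucket in s3.list_buckets()["Buckets"]:
        acl = s3.get_bucket_acl(Bucket=bucket["Name"])
        for grant in acl["Grants"]:
            if grant["Grantee"].get("URI") in PUBLIC_GROUPS:
                flagged.append((bucket["Name"], grant["Permission"]))
    return flagged

def unencrypted_rds_instances():
    rds = boto3.client("rds")
    dbs = rds.describe_db_instances()["DBInstances"]
    return [db["DBInstanceIdentifier"] for db in dbs if not db.get("StorageEncrypted")]

if __name__ == "__main__":
    print("S3 buckets with public grants:", public_s3_buckets())
    print("RDS instances without storage encryption:", unencrypted_rds_instances())
```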

This has been my recent experience in 2013-2015, where the integrated security tools (18 of them, from 18 best-of-breed vendors) all also supported their security system as a virtual appliance (with both ESX and AMI support) and offered FIPS/forensics-compliant deployment capability (like Guardium), so that they can easily run in the AWS cloud.

This approach also relieves enterprises (with Direct Connect, virtual HSMs, etc.) of the need to focus on infrastructure security (data center, network security, DNS, NTP, DHCP, including virtual desktops and NAC) and frees up resources to focus on modernizing, mobile-enabling and maturing the application space!






 

CourionAssessing the Risk of Identity and Access [Technorati links]

June 10, 2015 05:30 PM

Access Risk Management Blog | Courion

Here at Courion, our mission is to help customers succeed in a world of open access and increasing threats. We want to make sure that the right people have the right access to the right resources and that they are doing the right things with those resources. The question becomes, how does an organization assess those threats and gauge the risk it faces from both internal and external forces? Moreover, how do you plan for that risk and put in place processes to help detect, identify and manage it?

Most Common Risks

With an increasing number of computers and other devices, and more ways in which users access resources, granting and then monitoring and managing complex user access rights becomes harder every day. The stresses and strains on access can come from all over, but the most common offenders are:

• Routine changes such as hiring, promotions or transfers

• Business changes such as reorganizations, the addition of new products, or new partnerships

• Infrastructure changes such as mobility, cloud adaptation, system upgrades, or new application rollouts.

Routine vs Business vs Infrastructure Change

In addition to the stresses from business change, there are an increasing number of government regulations that require compliance, regardless of industry. From healthcare to banking, these regulations climb into the hundreds and assuring that you are fully compliant is more difficult than ever. This increase in regulations along with the increase in complexity of access rights makes identity and access governance a red hot priority.

What is Identity and Access Governance?

Identity and access governance (IGA) tools establish an entire lifecycle process for identities in an organization, providing comprehensive governance not just of the identities but also of their access requests. These lifecycle decisions are developed through real-time intelligence and are informed by an organization's processes. When preparing for an audit, we have to answer questions we had never been asked before: Who has access to what? What does that access allow them to do? And why do they need that access? IGA helps to answer those questions up front, to ensure that every identity has the right access, to the right things, at the right time.

When the internet was brand new, an organization had one room with only two to three people having access to resources. As a result, there was a pretty low risk of anyone hacking their way in. Now, our data centers are everywhere from a server room in a remote location to the cloud of everywhere-ness.

The result is that we have a broader and ever exploding attack surface and diversity of infrastructure. You’ve heard of the “Internet of Things” and these “things”, that is, Internet-enabled devices and resources, such as a building thermostat or a household appliance, have increased the attack surface tenfold.

Unfortunately, we are also faced with a super-sophisticated attacker ecosystem. Hackers are now working collaboratively, looking for weaknesses in your infrastructure, and are armed with increasingly sophisticated and specialized tools and services. It may only take a hacker a few minutes to get into your system, but now they know that the payoff is worth waiting days or even months for the perfect time to strike.

The Issue of Compliance

If you look at the most recent Verizon PCI Compliance Report, you will see that average organizational compliance is at 93.7%. However, when you break that number down into fully versus partially compliant firms, you will see that only 20% are fully compliant. So if, as organizations, we are collectively 93.7% compliant, then why has the total number of security incidents detected increased 48% since 2013? The answer is that we need more visibility into our systems. The top audit findings behind these attacks are:

• Excessive access rights

• Excessive developers’ access to production systems and data

• Lack of removal of access following a transfer or termination

• Lack of sufficient segregation of duties

The biggest risk here is credentials. The volume of stolen credentials is no surprise when you consider the number of transfers and terminations, and the accounts with excess access to sensitive systems that may remain active.
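A simple way to see why stale credentials matter is to cross-reference the HR system of record against the accounts that are still enabled in a target application. The sketch below is a hypothetical illustration of that reconciliation step; the data sources and field names are invented, and real IGA products run this kind of check continuously rather than as a one-off script.

```python
# Hypothetical reconciliation: find accounts that are still enabled for
# people HR says have been terminated, plus accounts with no known owner.

hr_records = [
    {"employee_id": "e100", "status": "active",     "department": "Finance"},
    {"employee_id": "e101", "status": "terminated", "department": "Finance"},
    {"employee_id": "e102", "status": "active",     "department": "IT"},
]

app_accounts = [
    {"account": "jdoe",   "employee_id": "e100", "enabled": True, "entitlement": "AP_CLERK"},
    {"account": "asmith", "employee_id": "e101", "enabled": True, "entitlement": "AP_ADMIN"},
    {"account": "svc01",  "employee_id": None,   "enabled": True, "entitlement": "AP_ADMIN"},
]

terminated = {r["employee_id"] for r in hr_records if r["status"] == "terminated"}

orphaned = [a for a in app_accounts
            if a["enabled"] and (a["employee_id"] in terminated or a["employee_id"] is None)]

for account in orphaned:
    # Each hit is either a leaver whose access was never removed or an
    # unowned (service) account that needs an accountable owner.
    print("Review needed:", account["account"], account["entitlement"])
```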

According to the Verizon Data Breach Investigations Report, 2015, when asked if their organization is able to detect if access credentials are misused or stolen, 42% of companies surveyed in the report said they are not confident in their ability.  Even worse, according to CSOOnline, 66% of board members are not confident of their companies’ ability to defend themselves against any cyberattack.  For those of us on the information security team, that shows a lack of boardroom trust in our capabilities.

Why do board members have so much trouble trusting our cybersecurity measures? Consider the fact that in 60% of cases, attackers are able to infiltrate the system within minutes and it typically takes information security around 225 days to find the breach. Just recently, the U.S. government Office of Personnel Management was hacked and more than 4 million current and former government employees may be affected. While investigators have known about the breach since April, they are still trying to determine what was hacked and what information was leaked since it could have been up to six months since the attackers initially gained access into the system.

Preparing for an Attack

This attack makes us think about the elements of an attack and where our federal government’s systems may have broken down. The elements of an attack are:

Data Breach Lifecycle

While we have anti-virus and anti-malware to fend off some of these attacks, and DLP and SIEM processes in place to fend off or detect others, we do not have the ability to fully defend against access targets and lateral movement once access is gained. What this means is that even though we are spending money, sometimes up to 85% of our budget on defending the perimeter, we have little to no security on the inside stopping hackers once they have penetrated our networks.

Are you ready for an attack on your system? Do you have a plan for internal and external breaches? Do you know your current risk? In part 2 of “Assessing the Risk of Identity and Access” we will discuss ways you can measure your perceived risk and ways to monitor your access rights to ensure true compliance.

Want to know your risk? Contact us for an Access Risk Assessment of your system and identify your risks today.


blog.courion.com

Nat Sakimura[Today's data leak] Up to 12,000 Tokyo Chamber of Commerce member records leaked: PC virus infection, Metropolitan Police to investigate [Technorati links]

June 10, 2015 02:55 PM

Lately the pace has picked up, and we seem to be getting a new data leak every day.

According to a Jiji Press report [1], the Tokyo Chamber of Commerce and Industry announced today, the 10th, that it had suffered a targeted attack and leaked data. The leaked material includes lists of participants in seminars over the past three years that were stored on a file-sharing server in the International Division. The data consists of names, phone numbers, email addresses, company names and the like; no financial information such as bank or brokerage account details is said to be included. The Metropolitan Police Department reportedly plans to investigate, with possible charges including the supply of electromagnetic records containing unauthorized commands (i.e. distributing malware). The matter was reported to METI on the 4th [2].

The article goes on to say:

Because access to the personal information was limited to International Division staff, no password had been set. No damage has been reported so far, and only one PC was infected.

Seriously... orz. I really wish people would abandon this kind of perimeter-security thinking and apply proper access control to the data itself.

Hmm, wait a moment. "No password had been set" is an odd way to put it. Could it be that, rather than keeping the data in a database accessible only from the International Division's IP addresses, it was sitting in an Excel file or something similar that simply was not password-protected...?

[Footnotes]

[1] Jiji Press, 'Up to 12,000 member records leaked: PC virus infection, Metropolitan Police to investigate - Tokyo Chamber of Commerce', http://www.jiji.com/jc/zc?k=201506/2015061000093&g=soc (retrieved 2015/6/10)

[2] Jiji Press, 'Chief Cabinet Secretary Suga calls for measures to prevent recurrence - Tokyo Chamber data leak', http://www.jiji.com/jc/zc?k=201506/2015061000403&g=eco

 

Nat SakimuraWhat "number" design should look like: reflections on the pension number leak [Technorati links]

June 10, 2015 02:29 PM

In the pension number leak incident, the plan is apparently to "change every leaked number" [1]. Personally, my reaction is a weary sigh. As I wrote in yesterday's article [2], if the scheme is operated properly, leakage of the number itself is not much of a risk, and since the names, addresses and other data leaked along with it cannot be changed, changing only the pension number does not accomplish much.

Conversely, because the pension number is an old design, changing it means that anything that should change in tandem may fail to do so, and that in itself could cause further harm.

The design of a "number" (strictly speaking it should be called an identifier, but I will call it a "number" here for convenience) involves many considerations that depend on the expected usage, so in a sense the "ideal number design" is case by case. At the same time, there is a minimum set of requirements that should always be met.

So let me list some of those requirements for a "number".

  1. Create a primary-key identifier, the "personal number". This is in principle immutable. Because we never want to change it, it is used only for the internal management of the "numbers" actually handed out (hereafter simply "numbers") and never leaves the system.
  2. Each "number" carries an issue date, activation date, suspension date, reactivation date and retirement date [3], managed against the primary key.
  3. Give the "number" a distinctive format, for example the third character is katakana and the fourth is a checksum. That way, if data leaks, records in this format can be kept out of search engines and the like.
  4. The "number" has an expiry date [4].
  5. The "number" can be changed at any time. The management system provides an API for changing it.
  6. When an organization receives a "number", it presents the "number", its "organization number" and its "organization credential" to the per-organization number issuance API (provided by the number-management organization) and obtains an "organization-specific number" for that individual. The "number" itself is discarded immediately [5]. From then on, the organization uses only this "organization-specific number" (a sketch of how such per-organization identifiers can be derived follows below).
  7. When one organization requests information from another, it obtains a "permission number" [6] for the data in question from the authorization server and uses it to make the request. The organization providing the information presents this "permission number" to the authorization server to learn whose data it should provide, and then provides that data.
  8. As a rule, each piece of data is held only by the organization primarily responsible for it; other organizations obtain it when needed, use it, and promptly discard it.

That's about it.
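Here is a small sketch of how the "organization-specific number" in item 6 might be derived, in the same spirit as the SAML persistent NameID and OpenID PPID mechanisms mentioned in footnote [5]. The key handling is simplified and purely illustrative; a real system would manage keys and formats far more carefully.

```python
# Sketch: deriving per-organization ("sectoral") identifiers from a master
# key so that two organizations cannot join their records on the identifier.
import hmac
import hashlib

MASTER_KEY = b"kept-only-by-the-number-management-authority"  # illustrative

def org_specific_number(personal_number: str, org_id: str) -> str:
    """Derive an identifier that is stable per (person, organization) pair
    but unlinkable across organizations."""
    msg = f"{personal_number}|{org_id}".encode()
    return hmac.new(MASTER_KEY, msg, hashlib.sha256).hexdigest()[:16]

# The same person gets unrelated identifiers at two different organizations:
print(org_specific_number("1234-5678", "pension-service"))
print(org_specific_number("1234-5678", "tax-agency"))
```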

What is good about this?

  1. If one organization leaks data, that data cannot be joined with the data held by other organizations. The privacy impact is low, so the cost of the incident stays low.
  2. The leaking organization's "organization-specific number" can be changed without affecting any other organization, so it can be changed as often as necessary. This also keeps costs down.
  3. The leaked data itself can be kept out of search engines and the like, and is easier to recall. Today we cannot even hope for that. [7]
  4. Because the "number" changes periodically, it is hard to use it to link past and present records across time, the kind of linkage that produces a "merciless society" [8]. Another cost saver.
  5. Since each organization holds only the data it is primarily responsible for, the privacy impact of a leak is lower than today, where every organization keeps its own copies of the data.

Nothing but upsides, as you can see.

What's that? "If we did this, the systems couldn't cope. A large-scale system serving 100 million people would never work!" Oh, please. This is exactly how the Internet itself works, and in that world 100 million users is small. Of course a badly designed system will not work [9]. But a JSON/REST architecture of the kind Google and Facebook run can handle it just fine, if it is designed properly.

Incidentally, the "My Number" system actually incorporates a good deal of this thinking, for example in the information provision network. The awkward part is that "My Number" itself is "in principle immutable"... [10] That is probably a political matter; I suspect the system is built so that the number can in fact be changed. Yes, surely.

[Footnotes]

[1] Nobuo Gohara, 'Is it really safe to "change the leaked basic pension numbers" and "send the change notices by post"?' (2015/6/9), Huffington Post, http://www.huffingtonpost.jp/nobuo-gohara/nenkin-number_b_7540210.html

[2] Nat Sakimura, 'Is a leaked "number" really dangerous?' (2015/6/9), @_Nat Zone, http://www.sakimura.org/2015/06/3038/

[3] Ideally with at least one-second precision, not just a date.

[4] In what appears to be the most recent EU eID card rollout, Germany made the "number" the document number, so it changes on reissue. This is exactly right.

[5] The U.S. Department of Defense guidelines on Social Security number use basically say the same thing. Incidentally, if handing the "number" to an organization is itself considered a risk, there is a scheme in which the individual obtains the "organization-specific number" and hands that to the organization. SAML's NameIdentifier and OpenID's PPID are exactly such mechanisms. Since the exchange is automated, individuals probably never notice.

[6] Technically, this is called an access token.

[7] Unless, of course, someone maliciously reattaches the "number" to something else.

[8] Nat Sakimura, 'A merciless society and number systems: the danger of record linkage as seen in Victor Hugo's Les Misérables' (2010/12/13), @_Nat Zone, http://www.sakimura.org/2010/12/686/

[9] Such as an enterprise XML/SOAP system; those are sized for maybe two million users. With XML there is too much extra data and computation, and handling 100 million people that way would be painful.

[10] And the fact that every organization (employers, financial institutions and so on) ends up storing the My Number is also, well...

 

WAYF NewsPayment model [Technorati links]

June 10, 2015 11:48 AM

Institutions connected to WAYF as identity providers must contribute financially to the operation and development of the system.

For institutions that subscribe to DeIC's research network, the contribution to WAYF is included in their payment for the research network.

All other institutions must each pay one ten-thousandth of their ordinary operating costs as their contribution to WAYF. The contribution is invoiced once a year and is based on each institution's most recently published annual report.

For the calendar year in which an institution is connected to WAYF, the institution pays only for the calendar months in which the connection has been in place. If, for example, the connection is completed in March, only 10/12 of the price for that calendar year is payable.

Service providers do not pay to have services connected to WAYF.

WAYF NewsWAYF introduces proxy IdPs in interfederation metadata [Technorati links]

June 10, 2015 11:34 AM

WAYF now introduces distinct entities for its connected IdPs in the Kalmar2 and eduGAIN metadata feeds, enabling SPs to connect to Danish IdPs directly, in a peer2peer manner; for example, an SP like the GEANT Intranet can now connect directly to the Technical University of Denmark through eduGAIN.

Being a hub&spoke federation, Danish WAYF has until now published just a single IdP in Kalmar2 and eduGAIN metadata, namely the WAYF hub IdP. The real IdPs behind the WAYF hub have thus so far not been accessible as separate entities, making it technically impossible for a number of non-Danish SPs to connect directly to their Danish customer institutions through eduGAIN and Kalmar2.

This situation is now changing:

With these proxy IdP entities published in the Kalmar2 and eduGAIN metadata feeds, WAYF can now, in the context of interfederation, be thought of as any other peer2peer federation, breaking down both technical and mental barriers.

Over the coming week WAYF will gradually publish its almost 100 IdPs to Kalmar2 and eduGAIN — this to avoid potential problems with large, sudden increases in the number of IdPs and to be able to contain any problems that may arise during the operation.

As we publish an entity specific certificate — based on the same common key — for each entity, (older versions of) ADFS-based SPs are supported.
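For an SP operator who wants to see which IdPs are now visible in an aggregate, a small metadata-parsing sketch like the one below is enough to list the published IdP entities. The feed URL is a placeholder, not an official Kalmar2 or eduGAIN address; only the standard SAML metadata namespace is assumed.

```python
# List IdP entityIDs found in a SAML metadata aggregate (e.g. an eduGAIN
# or Kalmar2 feed). The feed URL below is a placeholder.
import urllib.request
import xml.etree.ElementTree as ET

MD = "urn:oasis:names:tc:SAML:2.0:metadata"
FEED_URL = "https://example.org/edugain-aggregate.xml"  # placeholder

with urllib.request.urlopen(FEED_URL) as resp:
    tree = ET.parse(resp)

# An EntityDescriptor containing an IDPSSODescriptor is an identity provider.
idps = [
    entity.get("entityID")
    for entity in tree.getroot().iter(f"{{{MD}}}EntityDescriptor")
    if entity.find(f"{{{MD}}}IDPSSODescriptor") is not None
]

print(f"{len(idps)} IdPs published in this aggregate")
for entity_id in sorted(idps):
    print(" -", entity_id)
```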

Ludovic Poitou - ForgeRockMy views on ForgeRock Identity Summit [Technorati links]

June 10, 2015 08:22 AM

View of the ocean from Half Moon Bay

Two weeks ago, I was in the mist of Half Moon Bay, attending the ForgeRock Identity Summit. This is the 3rd conference in the US and each year the event becomes bigger, nicer and better. The location itself was amazing, sitting on the edge of the Pacific Ocean, rocked (or lulled?) by the sounds of the waves.

ForgeRock User Group Attendees

On the day before the main conference, we hosted a ForgeRock User Group, which was very well attended, and we had the opportunity to talk with our customers, future customers and users about our product directions and their experiences deploying the products. I'd like to thank the attendees for the great discussions, the sharing, and the excellent feedback that is definitely going to translate into product features and enhancements.

I was planning on writing a summary of the conference, but my coworkers did such a good job at it, that I encourage you to read their recap of Day 1 and recap of Day 2. So I leave you with my usual visual summary of the ForgeRock Identity Summit 2015, and all the photos that I’ve taken during the event.



Filed under: General Tagged: community, conference, ForgeRock, identity, IdentitySummit, user-group

Matthew Gertner - AllPeersInterview with Achim Neumann of A. Neumann & Associates [Technorati links]

June 10, 2015 05:23 AM

Today we’re lucky enough to have an interview with Achim Neumann, Owner and President of A. Neumann & Associates, LLC, a premier mergers & acquisitions firmed headquarters in New Jersey with offices and representatives all over the Northeast United States.

Achim Neumann of A. Neumann and Associates LLC

So Achim, Where were you born and where did you grow up?

I was born in 1956 in Germany, close to Hamburg in the northern part of the country. Coming from an entrepreneurial family in the manufacturing field, with a company of 100+ employees in post-war Germany, I was exposed very early to the philosophies of creating and managing a successful business, something of great value in relating to our clients today and their day-to-day challenges.

Where did you study for college and what did you study Achim?

There are three significant phases of learning in my life.

First, I was fortunate in participating in a two year vocational program with an international commercial bank right after high school. Such programs are somewhat unknown in the US, however, they are very common in Europe and have great value, as they provide an early opportunity for a young person to be exposed to all aspects of banking. My education included the typical banking retail operations in various banking branches, lending, operational aspects, and stock brokerage investments.

The second part of my education was a liberal arts Bachelor's degree at Columbia University, completed in 1982 with a focus on economics. This was a natural extension of the previous education, and I was able to obtain the prize for the best economics student in my class. More importantly, the liberal arts degree gave me a very broad education, something I am very thankful for these days.

Finally, in 1983, I completed an MBA degree at The Wharton School, University of Pennsylvania, considered the leading MBA program in the country, if not the world. At that stage, my focus was on marketing, rounding out the previous experience in banking and economics.

When did you come to the states and why did you decide to stay in the states?

I relocated from overseas in 1979, and have been living in the US for the past 35 years, mostly in the North East (although, initially I lived for two years in Atlanta, GA).

I have thoroughly enjoyed my relocation, have developed many personal and professional relationships, and see great potential in our market here, in terms of future growth, versus the European market.

Even though it sometimes appears difficult to a business owner in the US to appreciate, the regulatory environment is considerably more stringent in Europe. Furthermore, the entrepreneurial spirit is so much more intrinsic here, consistent with the relatively short life of this nation.

What led you to open up Neumann and Associates?

ANA was a logical extension of my past experiences and education: not only was I employed by Siemens, a Fortune 500 company, for close to ten years, I had also managed two start-ups earlier in my career. One company focused on the security market, the other was a jazz music label.

Having been involved in both companies and having experienced the challenges a small business faces gave me a thorough understanding of many of our clients today, and it is quite complementary to my upbringing in an entrepreneurial environment.

Combining such experiences in small, midsize and large companies with my educational background is a perfect combination, and it is most certainly reflected in the success of A. Neumann & Associates, LLC, which covers the North East with a client base of 100,000 companies.

What are some of your hobbies?

As time has progressed, my hobbies have changed.

Whereas I had been very active in the yachting scene for many years (at one time training for the Olympic sailing team), having won many offshore yachting racing events and having managed crews of 15+ people, these days I enjoy taking my small motorboat out with my wife and kids over the weekend in the New York area.

However, I do also enjoy cross country motorcycle trips with my wife, and over the past ten years we have put more than 150,000 miles behind us, exploring close to all of the individual states in the country.

Always coming along on our trips is my extensive camera equipment, allowing me to consistently expand on my photo experience – having started 35 years ago with a simple photo journalism class at Columbia University.

Once again, we’d like to thank Achim Neumann for taking the time to answer All Peers.com’s questions and stay tuned for more interviews with influential business leaders.

The post Interview with Achim Neumann of A. Neumann & Associates appeared first on All Peers.

Matthew Gertner - AllPeersGreat Ways to Save on Your Satellite Television Service [Technorati links]

June 10, 2015 04:53 AM

How can you save on your satellite television service? ... photo by CC user Loadmaster  on wikimedia

I love watching television as much as the next person. Being able to catch up on some of my favorite television shows throughout the week, and even watching a flick or two with the family on the weekends can be a great pastime.

However, if you’re like me, you too have a busy schedule that doesn’t allow you to watch television hour after hour. I probably watch a total of ten hours of television (on a lucky week), yet the subscription services can often be kind of costly. Rather than ditch the television service altogether, I decided to switch from cable to satellite services while still looking for huge savings.

Here are a few options you might try to save on your satellite television service at home:

Bundling Services

One option that satellite television service providers have is bundling. This essentially means that you’re able to package your television, internet, and phone services into one for a discounted price. By choosing a bundle package, consumers can save as much as 10-20% on their monthly bill.

However, since satellite television service providers don’t have their own internet and phone services, you will have to receive the bundled discounts through their partnering service providers. There are several for you to choose from so that you can get the landline features and internet speed you need.

Compare Packages

Another way to get your satellite television bill down is to compare the various packages that are offered. Not only should you compare packages between various satellite subscription providers, but you should also compare packages within each company.

Each service provider has several packages (generally a basic, premium, and platinum package). Review each of the packages available to see which one will give you the best options for landline features, internet speeds and channel line ups. Visit tvlocal.com to go over the various packages on offer and choose which one will work best for your entertainment and communication needs.

Ask About Specials

Companies are looking to gain new customers on a regular basis. Therefore, if you really want to get a good deal, contact the company directly to find out what types of specials they might be able to offer you. Sometimes, you'll find that a customer service representative is willing to offer you more discounts simply to get you signed up as a customer.

Upgraded Technology

Generally with a cable subscription service, you’ll need to have a cable box for every television set you have available. This can cost you an additional rental fee for each box you have. However, satellite service providers have new technology that will allow you to purchase wireless boxes that can connect and be used for multiple television sets. Several satellite television providers also give a huge savings for new customers looking to purchase the latest technology.

If you’re looking for convenient yet affordable ways to save money on your monthly television subscription services, these ideas will certainly help you save a bundle. When choosing the best television service provider, be sure to also compare things such as overall value, channel lineup, and features to get the best bank for your buck.

Now you can keep up with all the latest television shows and movies without having to break the bank. Here’s to binge watching and comfort foods. Enjoy.

The post Great Ways to Save on Your Satellite Television Service appeared first on All Peers.

June 09, 2015

Ian GlazerIdentity is having its TCP/IP moment [Technorati links]

June 09, 2015 04:00 PM

[This is my keynote from Cloud Identity Summit 2015. Unlike most of my talks, this one did not start with a few phrases and then an outline and then a speech and then a deck. This one dropped out of my noggin in basically one whole piece. I wrote this on a flight back home from London based on a conversation with a friend in the industry. Oh, there is no deck. I delivered this as a speech.]

[Credit where credit is due: Josh Alexander gave me the idea for the username and password as cigarettes and the sin tax. Last year, Nat Sakimura around 2 in the morning in my basement talked about service providers charging for username and passwords to cover externalities, and I completely forgot about the conversation. Furthermore, at the time, I didn’t fully track with his idea. I totally get it now and want to make sure I assign full and prior art credit to Nat – the smartest guy in identity, sent from the future to save us all.]

 

 

Remember when we used to pay for a TCP/IP stack? Remember when we paid for network stacks in general? Hell, we had to buy network cards that would work with the right stack.

But think about it… Paying for a network stack. Paying for TCP/IP. Paying for an implementation of a standard.

How quaint that sounds. How delightfully old school that sounds.

But it was. And we did.

And now? No one pays for a TCP/IP stack. Or at least no one pays for it directly. I suppose you can say that what you spend on an OS includes the cost of the network stack. It’s not a very good argument but I suppose you can make it.

When network stacks became free (or essentially cost free) networking jobs didn’t go away. I would posit that we have more networking engineers now than we’ve ever had before. Their jobs morphed with the times and changes in tech.

It’s mid-2015 and I think we need to admit that the identity industry now looks a lot like the networking industry did back then. The standards are mature enough. The support for them is broad enough. Moreover, not taking a standards-based approach is antithetical to the goals of the modern enterprise.

Simply put, identity is having its TCP/IP moment.

Going through our TCP/IP moment has three implications:

  1. Not being standards-based is officially on the wrong side of history
  2. The business model for identity will change
  3. We as a profession and as an industry are not under threat from our TCP/IP moment

I am going to explore the first two of these implications so that you can better understand the third – that although great change is ahead, we need not be afraid of that change.

Not being standards based is the wrong side of history

If you do not support federation standards, you are on the wrong side of history. If you do not support standards based user provisioning, you will soon be on the wrong side of history. You are the Banyan Vines of identity. You are the LU6.2 of identity. And if you are newer to technology and haven’t heard of either Banyan Vines or LU6.2, then I rest my case.

What I said last year continues to be true: our identity standards are more than capable for the vast majority of use cases. Standards for federated single sign-on and attribute distribution are especially strong. Historically user provisioning has not been great but it is about to get much better with SCIM 2.0. Authorization, in the form of XACML and its related profiles, is robust and capable and its adoption curve ought to be bending upwards. Things like UMA and Minimum Viable Consent Receipt provide coverage for underserved and emerging use cases.
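To make one of those standards concrete: a SCIM 2.0 provisioning call is just a small JSON document POSTed to a /Users endpoint. The sketch below is a minimal, hypothetical example of creating a user per RFC 7643/7644; the endpoint and bearer token are placeholders rather than any particular vendor's API.

```python
# Create a user via SCIM 2.0 (RFC 7643/7644). Endpoint and token are
# placeholders; any conformant service should accept the same payload.
import requests

SCIM_BASE = "https://example.com/scim/v2"  # placeholder
TOKEN = "REPLACE_ME"                        # placeholder

new_user = {
    "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
    "userName": "ada.lovelace@example.com",
    "name": {"givenName": "Ada", "familyName": "Lovelace"},
    "emails": [{"value": "ada.lovelace@example.com", "primary": True}],
    "active": True,
}

resp = requests.post(
    f"{SCIM_BASE}/Users",
    json=new_user,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/scim+json",
    },
)
resp.raise_for_status()
print("Created user id:", resp.json()["id"])
```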

And it isn’t just that there are standards to be used. We have seen good work in conformance testing of those standards. Conformance serves as a testing tool for technology providers and a mechanism to remove risk for a technology selection for enterprises. For those of you who missed it, the OpenID Foundation released an open-source conformance test for OpenID Connect. This is an important milestone along OpenID Connect’s path of maturation.

So not only do we have the standards but we also have conformance testing, in at least some places. And yet service providers and software vendors aren’t necessarily using those standards.

Case in point, there’s a popular instant message system used by development teams. It doesn’t support SAML. It doesn’t support OpenID connect. Its user forum has dozens of pages on this topic: please support SAML, please support OpenID connect. The comments almost consistently read “please please please… If you support SAML, my enterprise would adopt this tool right now. But we can’t if you won’t.”

My reaction to threads like this, to products like this, is “why do you hate your users?” Comments and feedback from customers are gifts. And yet many service providers do not acknowledge those gifts. When you have prospects telling you, “please add these standards that make me (and you) safer and more efficient,” I can see no reason not to add standards.

A pathetic counter-argument is “our service isn’t enterprise only and non-enterprise users need a way to create a username and password.” My retort is “why do you hate your customers?” Why do you make individual users create yet another username and password? At the very least, you ought to be supporting 3rd party credentials. It’s a step in the right direction; it is a step towards supporting identity standards. And the individual user wants this… so long as they get the appropriate privacy assurances and protections.

Another retort I have is “do you like being a toxic waste farmer?” Holding username and passwords, holding non-federated accounts makes you a toxic waste farmer. Most people don’t want to nor have the ability to safely be a toxic waste farmer. Does your line of business peer understand the risks? Does your Board understand the risks of being a toxic waste farmer? Do your investors?

For some of us, holding username and password data is a cost of doing business. Our businesses require significant investment to protect that information appropriately. We stake the trust of our brand on attempting to do as best as we can with that data. But, let me be very clear, we should be the exception and not the norm.

I have said it before and I will say it again – if your service provider does not support standards-based identity services, they are not acting in your best interest nor in the best interest of your customer. There are only two reasons why a service provider is not implementing standards-based identity. They might simply be unaware that there are standards to use and libraries available to do so. I have a hard time believing that service providers are ignorant of identity standards in this day and age, but I suppose it could be true. And if it is true, then it is on us as an identity industry to do a better job of making it easy to adopt standards.

The other reason why a service provider doesn’t support standards-based identity: they are sociopathic. Not supporting identity standards makes you a S-SaaS – sociopathic software as a service provider. And we want no part of you.

The business model for identity is going to change

The whole of the business model for identity is going to change after our TCP/IP moment. This change will affect every player in the identity market: enterprise customers, individual consumers, technology suppliers, service professionals, and industry analysts.

Enterprise customers will expect to have standards built in. No one expects to have to install a TCP/IP stack in their virtual machine and no one will expect to have to install SAML or OAuth in their identity services.

Enterprises expect products that reduce risk. Standards reduce deployment and operational risk. Ergo, enterprises will expect identity standards as a natural part of the services they deploy and consume.

Identity technology suppliers simply cannot and will not be able to charge for standards-based identity. That is essentially asking your customers to pay for risk to be removed. In fact, it sounds like extortion. “Nice IAM project you got here. It would be a shame if something bad were to happen to that SSO process.”

Where identity technology suppliers can charge is for the new and the novel. Have an amazing context-based authentication and recognition system? Great; charge for that. Deliver an amazing user experience and raise identity assurance? Awesome. Charge for that.

But what about technology suppliers other than identity technology? What about service providers for collaboration, content management, workforce automation? Their business model will have to change after our TCP/IP moment as well. Enterprises will expect identity standards-based endpoints. Our collective stomach for custom integrations will be gone. Support for SAML and its kin can no longer be a “for-fee” feature.

And individual consumers will demand more of technology suppliers. They may not know that they are asking for standards-based identity services, but they will be asking for the ability to use 3rd party credentials at each of their service providers. Do not make us create yet another username and password.

But technology suppliers can continue to make money on identity, just not in the way they expect. They can charge for extended support of bad practices. If the customer really really really wants to use username and passwords, a practice that increases risk for both customer and the supplier, then make the customer pay for it. Yes, that’s right; charge people to continue to use username and passwords. It is directly akin to charging for extended support on technology and products that have passed their prime. If you want to still use Microsoft XP, then pay for extended support. You want to use username and password (and no other factors), then you will have to pay for it.

Maybe the extended support analogy doesn’t work for you. Let’s try another – a sin tax. What if we treat username and password use like smoking? You want to light up another username and password? You are free to do so but you have to pay the service provider’s levied sin tax – the revenue goes to mitigating the risk for you and service provider. So enjoy that methylated username and password but be prepared to pay a premium.

What about service professionals? How will their business model change after our TCP/IP moment? The low value plumbing and connectivity parts of their business will be worth even less. Universal standards-based connectivity removes the heavy lift from integration and in doing so removes risk. Professional services companies know this and it isn’t a threat to their business. They want to be part of the higher value business process integration and best-practices advisory business. Our TCP/IP moment is further encouragement to do so.

But identity consumers and suppliers, along with their professional services peers, aren’t the only actors in our world. The way the identity market is studied and judged by industry analysts has to change too.

First, analysts will need new ways of measuring identity companies’ success other than strictly looking at revenue. As a peer said years ago, “If Sun can get bought and their identity products shut down, and if HP can exit the market, then it can happen to anyone.” Sun and HP are proof points that revenue and viability are not so closely correlated. Analysts must measure our markets based on the quality of a service and the reach of that service. Focusing on revenue and profitability is increasingly irrelevant, especially as platform providers offer identity services as part of their service dial-tone.

Second, analysts need to give room for innovation instead of expecting every identity vendor to fit their preconceived notion of what an identity solution is supposed to look like, how it should be marketed, and how it should be measured. And they need to do so for their customers’ sake. An analyst, who shall remain nameless, pointed out to me that “any time a company tried to deliver identity services in a new way, with a new price model, associated with its larger business and services, they were punished by the market – especially analysts.”

Speaking as an ex-analyst, it is so much easier to deal with a new identity company by putting them in a box with similar companies. “Oh, I see, you are like Ping. You are like SailPoint.” But this approach isn’t fair to either the new company or the one I compared it to. Most importantly, it wasn’t entirely fair to my customers. It is high time the shape of our identity solutions changed, and the TCP/IP moment will cause this. Analysts must leave aside the notion of what identity solutions look like, especially considering those notions are a decade-plus old.

There is no threat to our industry and our profession

Before the TCP/IP moment you were a Netware gal, an AppleTalk dude, a token ringer. And things changed. And you adapted. You found your skills applicable as an AD admin, an eDirectory guru, a firewall jockey, an application delivery specialist.

Identity is at its TCP/IP moment. And it is the best time ever to be in this industry.

What we must and will show the business is that after standards-based identity is in place then new opportunities appear and existing processes can be done more easily. Having the risk of the project mitigated through the use of standards, you are free to be creative. Take your knowledge and help the business see that every transaction is an identity transaction. Every business interaction is a relationship.

Identity is the key to growth in our organizations: further reach, higher value, better experience. But to be a part of this growth, we have to plug our ears to the siren song of audit-centric identity and thinking our identity processes are unique special snowflakes. Instead we just need to aid our peers as they serve our ultimate customer.

External identity. Customer identity. Consumer identity. It goes by many names. Eventually it will go by just one name: identity. Customer identity is our future. Being a crucial part of a customer identity project will require learning new languages, aiding new peers, and frankly being a little bit uncomfortable from time to time. We as an industry do not have a template, a road map, or a reference architecture for customer identity. We are still scouting the territory, finding our way.

But we, fellow identity professionals, are the best qualified to serve as guides for the business. We have learned how to use standards to make projects less risky and make business processes more efficient. We’ve learned a bit about user experience in our misadventures with passwords and have tested our conclusions with 2 factor and adaptive auth. We’ve become data custodians along the way; understanding the importance of the data we hold and respect the custodial nature of being an identity management professional.

We are keepers of identity. Employee. Partner. Customer. Citizen. And in that we represent the key to growth for our organizations.

My vision for a post-TCP/IP moment world

Let’s pretend for a moment that you’ve bought into this idea of identity’s TCP/IP moment. What then does our world look like the moment after we’ve had our TCP/IP moment?

First, basic standards-based identity services are freely available. For everyone. In every industry. On every tech stack. In every cloud. The ability to emit and consume standards-based identity is as natural and as easy as using TCP/IP. This removes unnecessary risk from our projects. It removes an unneeded distraction of establishing connectivity along which awesome will flow.

Second, service providers will provide standards-based end points. Then they will offer ability to use 3rd party credentials. Then they will allow username plus an unphishable token. In that order.

And then, and only then, will they offer username and password… for a fee.

You want the equivalent of LU6.2 support? Get out your wallet. If you want the equivalent of Banyan Vines, then you must pay for extended support.

Third, having been freed of most of our connectivity concerns, amazing things happen. Identity technology vendors can focus more efforts on novel context-based authentication and recognition systems. They can obsess over delivering amazing customer-facing user experiences. Meanwhile, identity consumers use identity services like IDEs of awesomeness. They will deliver valuable relationships via immersive user journeys.

Identity is having its TCP/IP moment. Soon thinking of paying for standards based identity connectivity and services will seem as quaint and as outdated as paying for an implementation of a TCP/IP stack. It is a natural and necessary change in our market.

This change may not be comfortable but it doesn’t pose a risk to us as identity professionals. We have never been needed as we are needed now. For we are the keys to growth for organizations, and for ourselves. We are the keepers of identity, and this, this is our moment.

Nat SakimuraVote on the Personal Information Protection bill (including the My Number bill) postponed for now [Technorati links]

June 09, 2015 02:40 PM

According to NHK News on June 9:

The House of Councillors Cabinet Committee, which is deliberating the bill to amend the My Number Act and related laws, held a meeting of its board of directors on the 9th, and the ruling and opposition parties agreed to postpone the vote on the amendments for the time being, on the grounds that they need to see how the situation develops following the leak of a large volume of personal information from the Japan Pension Service's systems.

The report calls it the My Number bill, but formally it is the "Bill to Partially Amend the Act on the Protection of Personal Information and the Act on the Use of Numbers to Identify a Specific Individual in Administrative Procedures" [1], i.e. it is bundled with the amendment of the Personal Information Protection Act. As a result, the vote on the Personal Information Protection Act amendments has been pushed back as well... Well, well...

[1] http://www.cas.go.jp/jp/houan/189.html

Radovan Semančík - nLightOpen Source Identity Ecosystem Idea [Technorati links]

June 09, 2015 02:15 PM

A significant part of open source software is developed by small independent companies. Such companies have small and highly motivated teams that are incredibly efficient. The resulting software is often much better than comparable software created by big software vendors. Especially in the Identity and Access Management (IAM) field, there are open source products that are much better than the average commercial equivalent. And the open source products are much more cost efficient! This is exactly what the troubled IAM field needs, as closed-source IAM deployment projects struggle for better solution quality and (much) lower cost.

It is obvious that small independent open source companies can deliver great software. But the usual problem is that software created by a small company is a "point solution": a remarkable tool for solving a very specific set of problems. No small company really provides a complete solution on its own, and every engineer knows what it takes to integrate products from several companies. It is no easy task. This was an obstacle that kept open source IAM technologies from reaching their full potential. But this obstacle is a thing of the past. It does not exist any more.

Several open source IAM vendors have joined together in a unique cooperative group that has the working name "Open Source Identity Ecosystem". This includes companies such as Evolveum, Symas and Tirasa. The ecosystem members have agreed to support each other during activities that involve product integration. The primary goal of the ecosystem is to create and maintain a complete IAM solution (or rather a set of solutions) that will match and surpass the closed source IAM solution stacks.

The ecosystem is much more than yet another technology stack. The ecosystem is a completely revolutionary concept.

A stack is usually a simple set of products piled on top of each other and roughly integrated together. If a customer needs, say, the identity management component from the stack, he usually has only one option. The freedom of choice is severely limited. This leads to vendor lock-in, lack of flexibility and a very high cost.

But an ecosystem is different. The ecosystem adds a whole new dimension: there are several options for each component. If a customer needs an identity management component from the ecosystem, there are several options to choose from: Apache Syncope, supported by Tirasa, and midPoint, supported by Evolveum. There is no vendor lock-in. If one of them fails to meet expectations, there is always a second choice. Evolveum and Tirasa are competing companies, yet they have agreed on a common set of interfaces to make crucial parts of their products interoperable. Therefore both products can seamlessly live in the same ecosystem. But the internal competition still keeps the incentive for both products to evolve and improve. This concept provides a completely new experience and freedom for the customers. It also brings an enormous number of new opportunities to system integrators, value-added partners, OEM-like vendors and so on.

The ecosystem is completely open. If you like this idea you can join the ecosystem. This can be especially attractive for companies that maintain open source projects in the IAM field. But also open-source-friendly system integrators and service providers are more than welcome. Please see the discussion in the ecosystem mailing list for more details.

(Reposted from https://www.evolveum.com/open-source-identity-ecosystem-idea/)

Nat SakimuraMicrosoft Azure and Dropbox conform to ISO/IEC 27018, the international standard for cloud privacy controls [Technorati links]

June 09, 2015 09:18 AM

Microsoft Azure has apparently been verified as the first cloud computing platform to conform to ISO/IEC 27018 [1], the only international standard for privacy controls in the cloud. The certification was reportedly performed by BSI. It is also old news, dating back to February 16 this year; I may have seen it at the time and let it slide.

Furthermore, I noticed this week that Dropbox also appears to have obtained ISO/IEC 27018 certification. BSI must be very busy. I wonder whether JIPDEC will take this up as well, or is that impossible because of the PrivacyMark scheme?

ISO/IEC 27018 purchase page

ISO/IEC 27018 purchase page. Conveniently, an ePub edition is available in addition to the PDF.

ISO/IEC 27018 adds, following the privacy framework of ISO/IEC 29100, the privacy aspects that ISO/IEC 27002 does not cover. Its target is the PII processor in ISO/IEC 29100 terms, i.e. the "outsourced processor". A standard aimed at data controllers that are not processors is under development as ISO/IEC 29151. In fact, from the moment drafting started, the Japanese committee members, and indeed the international members as a whole, kept saying "hmm, really?" and "do we even need this? There is hardly anything cloud-specific", and the standard was launched anyway on the grounds that "well, if we are doing cloud security in 27017, then for consistency we should do this as a set". I believe that was the Nairobi meeting. Deliberation took place in SC 27/WG 5 (the WG for which I chair the Japanese national committee) [2], and since there was not all that much to do, it was settled very quickly. Moreover, since the overall framework is the responsibility of the aforementioned 29151, there is also an argument that finishing this before 29151 is done is questionable. So when someone says "we comply with ISO/IEC 27018!" my reaction is a somewhat lukewarm "hmm", though of course it is still better than not doing it at all...

 

This is the Microsoft Azure Japan Team Blog. It provides the latest information about Microsoft Azure and information useful for development.

Source: 'Microsoft Azure has been verified as the first cloud computing platform to conform to ISO/IEC 27018, the only international standard for privacy controls in the cloud' - Microsoft Azure Japan Team Blog - Site Home - MSDN Blogs

dropbox-27018

[1] ISO/IEC 27018 Information technology — Security techniques — Code of practice for protection of personally identifiable information (PII) in public clouds acting as PII processors

[2] The lead for the Japanese national committee is Mr. Sato of HP.