May 29, 2015

Rakesh RadhakrishnanTrending Towards Threat Centric ESA [Technorati links]

May 29, 2015 04:41 PM
When I started this blog, titled "Identity Centric ESA," almost a decade back, my motive was to highlight how Enterprise Security Architecture - given the hyper-distribution of IT (cloud models and mobile endpoints) - had to evolve from "Network Centric Security Architectures" heavily focused on perimeter defense (outside in) and "Application+Data Centric Security Architectures" heavily driven by compliance requirements (inside out) into one that is "IAM Centric" (Identity & Access Management), with the advent of SAML, XACML, OAuth, etc. Integrated identity intelligence has become pervasive today in cloud stacks and in networking (such as Cisco ISE), essentially maturing the ESA models that prevailed in the '80s and '90s. Now, for 2015 - 2020, it is obvious that ESA will be driven by a Threat Centric model (as in Cisco's slogan, "Threat Centric Security"), and by that I mean the following:

  1. NG SIEM tools with Big Data technologies will be Threat Intelligence Aware and Risk/Behavior Intelligence Aware (making them STIS - Security Threat IN Systems)
  2. Threat Analytics with Big Data Technologies will feed this Threat Intelligence (STIX) to all Security Controls
  3. This intelligence-driven ESA will involve real-time response (actionable intelligence) and be fine-grained in terms of the recommended set of actions
  4. Threat IN will be integrated into the IAM stack for "design time", "provision time" and "run time" IAM control response
  5. Threat IN will be integrated with Network Security controls and Application+Data Controls (pervasive integration)
  6. This creates the full-loop integration that is required in security systems and is IN (intelligence driven)
  7. Policy Automation and Dynamic Policy Generation+Combinations will mature the Enterprise Security Architecture to respond in real time (hence the STIX-XACML specs)
All industry verticals (health care, transportation, banking and more) will benefit from this NG maturity model in Enterprise Security Architecture (watch out for a book on Threat Centric ESA in 2015). The Threat Centric model is a natural evolution of the earlier "network centric" and "identity centric" models and builds on top of those approaches. Much like asking what the center of a spherical ball is, from a multi-dimensional perspective the Threat Centric model actually leverages the earlier models to reach a better level of capability and maturity. It will be awesome to watch this take shape rapidly between 2015 and 2020!
May 28, 2015

Ian GlazerStop Treating Your Customers Like Your Employees [Technorati links]

May 28, 2015 09:36 PM

Unlike many of my other talks, this one didn’t start as a speech and didn’t start with a few phrases. This talk started as an analyst briefing deck. It had become clear that many of the identity industry analysts, if they covered customer identity at all, did so with a very narrow view of it. I put the progenitor of this deck together to show how broad customer identity is and, more importantly, how amazingly large the opportunity ahead of us is.

Speaking season came upon me and I needed something to talk about. I took out all of the Salesforce-specific bits and turned the briefing deck into the keynote below.

The gist is simple: customer identity presents the opportunity to grow the business and move identity professionals from being in a cost center to being in a revenue generation center. We, identity professionals, can be business enablers, something we have never been before. But, and this is a big one, customer identity is larger than employee identity and applying enterprise-centric techniques to customer-centric use cases is a major mistake. What follows is my attempt to show how big the world of customer identity really is.

Customer identity is an amazing opportunity for identity professionals everywhere. Don’t treat your customers like your employees. Start delighting them.

Stop treating your customers like your employees from iglazer

Ian YipIdentity needs to disappear [Technorati links]

May 28, 2015 11:49 AM

The disappearing machine
Photo source: Paul Chapman - The disappearing machine
In recent years, security vendors, including ones that don't sell Identity & Access Management (IAM) products, have been pontificating about how identity needs to be the focus for all things security. They (my current and previous employers included) continue to be on-message, each beating everyone to death with their own version: identity-centric-security, identity-powered-security, identity-defined-security, identity-is-the-perimeter, identity-is-the-foundation, identity-is-the-intelligence, and on and on.

Yeah, we get it. Identity is VERY important. Enough already.

The problem with rolling out the same message for years is that people stop listening. It's like the age-old line in press releases: "the market leader in"; sure, you and every other vendor out there. The market leader. Yeah, right.

Ok, so I'm being a little cynical. But the fact that as an industry, we've had to go all broken-record on this means:
  1. We've not been very effective in explaining what we mean. AND/OR
  2. No one gives a crap.
The truth is probably a combination of the two.

From the 10,000 foot marketing message, we have a habit of diving too deep too quickly, skipping the middle ground and heading straight into explaining, debating and architecting how everything needs to hang together. For example: "You need to federate between the identity provider and service providers using standards like SAML, OAuth or OpenID while maintaining a translatable credential that can be trusted between partner domains. Which OAuth do you mean? 1.0? 2.0? Can't we just go with OpenID Connect? Doesn't that cover the use cases? We're effectively supporting OAuth right?"

Errr, yeah. Sure. Hey, architect person, I'm not entirely sure what all that means, but we do that, right? And why do we do that again?

We often answer the "why should we care" question by saying "you need security because you do, and identity is the key". And therein lies the problem. The "why should we care" question is difficult to answer in a meaningful, tangible way.

In addition, the reasons tied purely to security and risk no longer resonate. It's arguable that they ever did at all, but we could always pull out the audit, risk and compliance stick to metaphorically beat people with (oops, did I say that out loud?).

Today, we often pull out the data-loss card. But we can do better:
Organisations should care about identity so they can stop caring about it. Identity needs to disappear, but only from sight; it needs to be invisible.
I'll explain in the next post.

KatasoftThree Quick Ways to Increase Customer Data Security [Technorati links]

May 28, 2015 12:30 AM

The world of user data security is vast, complicated, and for many teams, difficult to navigate. When working with a legacy application, it can be difficult to determine the first, easy steps to ensure your user and customer data is more secure. But a few quick tips can dramatically improve user data security in most environments. At Stormpath, user data security is our top priority, so we want to share a few ideas to help you upgrade quickly.

Step 1: Separate The User Store from Application Data

One of the first – and easiest – steps to increase customer data security in the cloud is to separate user credentials and personally identifiable information (PII) from application data. Separating the user store ensures that any data collected by or provided to your application is not easily matched to its owner. What you separate depends on the application’s use case, but typically separated user data includes usernames, email, passwords and PII such as addresses or geolocational data.
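
To make the pattern concrete, here is a minimal sketch (the in-memory stands-ins and field names are invented for illustration; in practice the two stores would live on separate infrastructure):

// Two physically separate stores, modeled here as in-memory maps.
const userStore = new Map(); // credentials + PII live only here
const appDb = new Map();     // application data only

function createUser(id, profile) {
  // e.g. profile = { email, passwordHash, address }
  userStore.set(id, profile);
}

function createOrder(orderId, userId, items) {
  // Only an opaque user ID crosses into the application store:
  appDb.set(orderId, { ownerId: userId, items: items });
}

function shipOrder(orderId) {
  const order = appDb.get(orderId);
  // PII is fetched on demand from the separate store, never duplicated:
  const pii = userStore.get(order.ownerId);
  return { order: order, address: pii.address };
}

createUser('u1', { email: 'jane@example.com', passwordHash: '(hash)', address: '1 Main St' });
createOrder('o1', 'u1', ['widget']);
console.log(shipOrder('o1'));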

This separation of user data provides several benefits.

One of the typical use cases for Stormpath is to create a totally separate data store that runs on separate infrastructure, either in our public cloud, our isolated enterprise cloud, or on a private deployment. The separation of infrastructure increases user security even further – user data is less vulnerable to attacks on your core system and network.

Of course, don’t forget that user authentication data and PII should be protected and well-encrypted, both at rest and in transit, which brings us to the second step.

Step 2: Use a Strong Password Hashing Algorithm

We all know that user authentication data shouldn’t be stored in plaintext, but do we all follow that rule? By one estimate, 30% of companies store or transmit passwords in plaintext.

Employing an advanced hashing algorithm like bcrypt or scrypt makes hacking authentication data more difficult and more time intensive. Both of these algorithms are designed to take a long time to compute a hash in order to slow down brute force cracking attempts. Bcrypt, for example, uses a CPU-intensive algorithm to ensure password attacks require enormous computing power. Scrypt takes it one step further by requiring enormous amounts of memory to compute password hashes in addition to its high CPU requirements. Thus, attackers are forced to spend lots of time and money to attempt even the smallest of password cracking operations.
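
For illustration, a minimal sketch using the npm bcrypt module (the cost factor shown is just an example, not a recommendation from this post):

const bcrypt = require('bcrypt');

const COST_FACTOR = 12; // each +1 roughly doubles the time to compute a hash

async function hashPassword(plaintext) {
  // bcrypt generates a random salt and embeds it in the returned hash string
  return bcrypt.hash(plaintext, COST_FACTOR);
}

async function verifyPassword(plaintext, storedHash) {
  // re-derives the hash using the salt and cost embedded in storedHash
  return bcrypt.compare(plaintext, storedHash);
}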

Lastly, remember to encrypt your backups and database dumps. It seems obvious, but forgetting this step introduces a common attack vector in cloud computing. If your backup process doesn’t involve AES256, you might have an issue. If you’re looking for a secure way to store offsite backups, you might enjoy using tarsnap (created and run by Colin Percival, the creator of scrypt).

We believe it’s faster to use Stormpath’s pre-built Password Security and one of our 15-minute quickstarts than to roll your own password security. But if you must build it yourself, check out our blog post on building Password Security the Right Way and our handy Developer Best Practices Video on the Five Steps to Password Security.

Step 3: Frequently Update Your Hashing Complexity

When was the last time you updated your password hashing algorithm’s complexity?

One of the most common attack vectors is password infrastructure that hasn’t been properly maintained.

All hashing algorithms will be broken over time, and some commonly-used hashes are already incredibly insecure. There are two ways to stay ahead of the curve:

  1. Make it part of your annual plan to update your hashes by increasing the work factor or entropy. Using bcrypt or scrypt gives you the ability to tweak the ‘complexity’ of your hashing algorithm (changing how long it takes to compute a hash) via a configuration option; a sketch of the login-time upgrade pattern follows this list.

  2. If you have any infrastructure currently securing passwords with anything other than bcrypt or scrypt, upgrade it to bcrypt or scrypt immediately. To make this truly easy for you, here are some upgrade tutorials for Python and PHP. Lots of other examples can be found online.
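
Here is that login-time upgrade sketch, using the npm bcrypt module (the target cost and the saveUser callback are illustrative):

const bcrypt = require('bcrypt');

const TARGET_COST = 13; // raised as part of the periodic review

// Verify a login, and transparently re-hash stale passwords at the new cost.
async function verifyAndUpgrade(plaintext, user, saveUser) {
  const ok = await bcrypt.compare(plaintext, user.passwordHash);
  if (!ok) return false;
  // bcrypt embeds its cost factor in the hash string, so stale hashes are detectable:
  if (bcrypt.getRounds(user.passwordHash) < TARGET_COST) {
    user.passwordHash = await bcrypt.hash(plaintext, TARGET_COST);
    await saveUser(user); // illustrative persistence callback
  }
  return true;
}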

At Stormpath, we update our hashing complexity every 6-12 months, and can help migrate from your legacy user store to Stormpath Password Security if you don’t want to build this yourself.

Mike Jones - MicrosoftJWS Signing Input Options Specification [Technorati links]

May 28, 2015 12:03 AM

IETF logoThere’s been interest from a number of people in being able to not base64url-encode the JWS Payload under some circumstances. I’ve occasionally thought about ways to accomplish this, and prompted again by discussions with Phillip Hallam-Baker, Martin Thomson, Jim Schaad, and others at IETF 92 in Dallas, by recollections of conversations with Matt Miller and Richard Barnes on the topic, and by an exchange with Anders Rundgren on the JOSE mailing list, I decided to write down a concrete proposal while there’s still a JOSE working group that could consider taking it forward. The abstract of the spec is:

JSON Web Signature (JWS) represents the payload of a JWS as a base64url encoded value and uses this value in the JWS Signature computation. While this enables arbitrary payloads to be integrity protected, some have described use cases in which the base64url encoding is unnecessary and/or an impediment to adoption, especially when the payload is large and/or detached. This specification defines a means of accommodating these use cases by defining an option to change the JWS Signing Input computation to not base64url-encode the payload.

Also, JWS includes a representation of the JWS Protected Header and a period (‘.’) character in the JWS Signature computation. While this cryptographically binds the protected Header Parameters to the integrity-protected payload, some have described use cases in which this binding is unnecessary and/or an impediment to adoption, especially when the payload is large and/or detached. This specification defines a means of accommodating these use cases by defining an option to change the JWS Signing Input computation to not include a representation of the JWS Protected Header and a period (‘.’) character in the JWS Signing Input.

These options are intended to broaden the set of use cases for which the use of JWS is a good fit.
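
To make the difference concrete, here is a rough sketch of the standard JWS Signing Input next to the proposed unencoded variant (illustrative only, not the draft's normative text):

const b64url = (s) => Buffer.from(s).toString('base64')
  .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

const protectedHeader = JSON.stringify({ alg: 'HS256' });
const payload = 'a large (possibly detached) payload';

// RFC 7515 JWS Signing Input: both parts are base64url-encoded.
const standardInput = b64url(protectedHeader) + '.' + b64url(payload);

// Under the proposed option the payload is signed as-is, saving an encoding
// pass (and the ~33% base64 size inflation) for large or detached payloads.
const unencodedInput = b64url(protectedHeader) + '.' + payload;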

The specification is available at:

An HTML formatted version is also available at:

May 27, 2015

Mike Jones - MicrosoftJWK Thumbprint -05 draft addressing issues raised in Kathleen Moriarty’s AD review [Technorati links]

May 27, 2015 11:59 PM

IETF logoThis JWK Thumbprint draft addresses issues raised in Kathleen Moriarty’s AD review of the -04 draft. This resulted in several useful clarifications. This version also references the now-final JOSE RFCs.
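
For background, the thumbprint computation itself is small; here is a sketch for an RSA public key (required members only, lexicographically ordered, no whitespace, then hashed and base64url-encoded):

const crypto = require('crypto');

function jwkThumbprint(jwk) {
  // For an RSA public key the required members are e, kty, n;
  // JSON.stringify preserves this (already lexicographic) insertion order.
  const canonical = JSON.stringify({ e: jwk.e, kty: jwk.kty, n: jwk.n });
  return crypto.createHash('sha256').update(canonical).digest('base64')
    .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
}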

The specification is available at:

An HTML formatted version is also available at:

Mike Jones - MicrosoftTightened Key Managed JWS Spec [Technorati links]

May 27, 2015 11:58 PM

IETF logoThe -01 version of draft-jones-jose-key-managed-json-web-signature tightened the semantics by prohibiting use of “dir” as the “alg” header parameter value so a second equivalent representation for content integrity-protected with a MAC with no key management isn’t introduced. (A normal JWS will do just fine in this case.) Thanks to Jim Schaad for pointing this out. This version also adds acknowledgements and references the now-final JOSE RFCs.

This specification is available at:

An HTML formatted version is also available at:

Vittorio Bertocci - MicrosoftAzure AD Development at //Build/ and Ignite [Technorati links]

May 27, 2015 04:58 PM

What a couple of weeks they’ve been!
I absolutely loved the chance to speak directly with so many of you. You are doing pretty amazing stuff with our dev libraries, and your feedback is key for ensuring that we keep delivering on the features you need.

If you want to catch up, below you can find links to the recordings of the sessions I delivered during the two conferences. THANK YOU for the super nice evals and comments you gave me!

WAYF NewsFredericia Maritime School has joined WAYF [Technorati links]

May 27, 2015 11:41 AM

Fredericia School of Maritime and Tech Engineering has just joined WAYF. Users here now have the ability to access WAYF-connected web services using their institutional accounts.

Julian BondBruce Sterling's Austin, Texas, SXSW keynote speech from 2014. "The future is about old people, in big... [Technorati links]

May 27, 2015 08:11 AM
Bruce Sterling's Austin, Texas, SXSW keynote speech from 2014. "The future is about old people, in big cities, afraid of the sky."
http://www.theatlantic.com/technology/archive/2014/03/the-future-is-about-old-people-in-big-cities-afraid-of-the-sky/284459/

Bruce Sterling's post today on tumblr of Austin, Texas in 2015. Photos of a city, floods, menacing skies. You can't see the old people because they're all in cars.
http://brucesterling.tumblr.com/post/120005089208
 'The Future Is About Old People, in Big Cities, Afraid of the Sky' »
And four other intriguing things: when otters attack, the objects of brain interfaces, WWI diaries, and a digital model of a piano.

[from: Google+ Posts]
May 26, 2015

Mark Dixon - OracleMay 1927 – Model T Production Ceases [Technorati links]

May 26, 2015 06:16 PM

On May 26, 1927, Henry Ford and his son Edsel drove the final Model T out of the Ford factory. Completion of this 15-millionth Model T Ford marked the famous automobile’s official last day of production.

ModelT

 

The History.com article stated:

More than any other vehicle, the relatively affordable and efficient Model T was responsible for accelerating the automobile’s introduction into American society during the first quarter of the 20th century. Introduced in October 1908, the Model T—also known as the “Tin Lizzie”—weighed some 1,200 pounds, with a 20-horsepower, four-cylinder engine. It got about 13 to 21 miles per gallon of gasoline and could travel up to 45 mph. Initially selling for around $850 (around $20,000 in today’s dollars), the Model T would later sell for as little as $260 (around $6,000 today) for the basic no-extras model. …

No car in history had the impact—both actual and mythological—of the Model T: Authors like Ernest Hemingway, E.B. White and John Steinbeck featured the Tin Lizzie in their prose, while the great filmmaker Charlie Chaplin immortalized it in satire in his 1928 film “The Circus.”

I have never driven a Model T, but have always loved seeing those old cars in real life or in pictures, faithfully restored or heavily customized. Just for fun, here is a hot rod that originally was a Model T. My guess is that nothing but the “bucket” is original equipment, but who cares? Enjoy!

ModelT2

Mark Dixon - OracleTo the Moon and Back: We Can Do Hard Things [Technorati links]

May 26, 2015 05:15 PM

On May 25, 1961, President John F. Kennedy announced his goal of putting a man on the moon by the end of the decade.

Kennedy moon speech 1961

A brief excerpt of the speech:

I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the moon and returning him safely to the Earth. No single space project in this period will be more impressive to mankind, or more important for the long-range exploration of space; and none will be so difficult or expensive to accomplish.

… in a very real sense, it will not be one man going to the moon–if we make this judgment affirmatively, it will be an entire nation. For all of us must work to put him there.

What a thrill it was to live through those years of incredible innovation, splendid courage and diligent work by so many people. As President Kennedy said, it was not just one man going to the moon, it was a nation united in effort to get those astronauts there and bring them back.

P.S.  I think the look on Lyndon Johnson’s face is priceless.  It is as if he were thinking, “What in the world has that guy been smoking? We’ll never do that!”

Mark Dixon - OracleHealthy Eating – Really? [Technorati links]

May 26, 2015 05:04 PM

Incorporating all the current health buzzwords in your diet doesn’t necessarily mean you are eating healthy:

Marketoonist

 

Tom Fishburne (aka Marketoonist) explains:

It’s a tricky time to be a food marketer. How consumers define what it means to be “healthy” is in flux. As a food marketing friend pointed out, consumers are increasingly prioritizing food purity over calorie count.

Chipotle is the poster brand for the current state of health positioning. They’re taking a leadership role in progressive stances like GMO-free and sustainable sourcing. And this obscures the fact that an average meal at Chipotle packs a whopping 1,070 calories, close to a full day’s worth of salt, and 75% of a day’s worth of saturated fat. A Chipotle burrito has more than double the calories, cholesterol, and grams of fat than a Taco Bell Supreme Beef Burrito.

It’s similar to soda makers that tout being “made with real cane sugar” or granola bars that are really glorified candy bars. There’s an aura of health that distracts from the actual nutritional picture. Researchers refer to this as a “health halo.”

Maybe the biscuits and gravy I ate for breakfast yesterday weren’t so bad after all!

OpenID.netEnhancing OAuth Security for Mobile Applications with PKCE [Technorati links]

May 26, 2015 11:48 AM

OAuth 2.0 is the preferred mechanism for authorizing native mobile applications to their corresponding API endpoints. In order to be authorized, the native application attaches an OAuth access token to its API calls. Upon receiving a call, the API extracts the token, validates it (checks issuer, lifetime, associated authorizations, etc) and then determines whether the request should be allowed or denied.
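
As a rough sketch of that validation step, assuming the access token happens to be a JWT (the names, issuer and signing key are illustrative; real deployments vary):

const jwt = require('jsonwebtoken');

const SIGNING_KEY = process.env.AS_SIGNING_KEY; // key shared with the AS (illustrative)

// Express-style middleware: extract the bearer token and validate it.
function requireAccessToken(req, res, next) {
  const auth = req.headers.authorization || '';
  const token = auth.startsWith('Bearer ') ? auth.slice(7) : null;
  if (!token) return res.sendStatus(401);
  try {
    // Checks signature, lifetime and issuer in one call:
    req.claims = jwt.verify(token, SIGNING_KEY, { issuer: 'https://as.example.com' });
    next(); // request allowed
  } catch (err) {
    res.sendStatus(401); // request denied: invalid, expired, or wrong issuer
  }
}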

Of course, before the native application can use an access token on an API call, it must necessarily have first been issued that token. OAuth defines how the native application, with a user’s active involvement, interacts with an Authorization Server (AS) in order to obtain a set of tokens that represent that user and their permissions. The best practice for native applications leverages a version of OAuth called the ‘authorization code grant type’ – which in this context consists of the following steps:

  1. Upon installation, the native application registers itself with the mobile OS as the handler for URLs in a particular scheme, e.g. those starting with ‘com.example.mobileapp://’ as opposed to ‘http://’.
  2. After installation, the native application invites the user to authenticate.
  3. The native application launches the device system browser and loads a page at the appropriate AS.
  4. In that browser window, the AS
    • authenticates the user. Because authentication happens in a browser, the AS has flexibility in how & where the actual user authentication occurs, i.e., it could be through federated SSO or could leverage 2-Factor Authentication, etc. There are advantages to using the system browser and not an embedded browser – notably that a) any credentials presented in the browser window are not visible to the application, and b) any session established in the browser for one native application can be used for a second, enabling an SSO experience
    • may obtain the user’s consent for the operations for which the native application is requesting permission
  5. If step 4 is successful, the AS builds a URL in the scheme belonging to the native application and adds an authorization code to the end of the URL, e.g. ‘com.example.mobileapp://oauth?code=123456’. The AS directs the user’s browser to redirect to this URL
  6. The browser queries the mobile OS to determine how to handle this URL. The OS determines the appropriate handler, and passes the URL to the appropriate application
  7. The native application parses the URL and extracts the authorization code from the end
  8. The native application sends the authorization code back to the AS
  9. The AS validates the authorization code and returns to the native application an access token (plus potentially other tokens)
  10. The native application then stores that access token away in secure storage so it can be subsequently used on API calls.

The current reality is that there is a security risk associated with Steps 6-8 above that could result in a malicious application being able to insert itself into the above flow and obtain the access token – and so be able to inappropriately access the business or personal data stored behind the API. The risk arises due to a combination of factors:

  1. The nature of how native applications are distributed through public stores prevents individual instances of applications having unique (or secret) credentials. Consequently, it is not currently practical to expect that the native application can authenticate to the AS when exchanging the code for tokens in Step 8. As a result, if a malicious application is able to get hold of the code, it will be able to exchange that code for the desired tokens.
  2. In Step 6, the handoff of the authorization code can be intercepted if a malicious application is able to ‘squat’ on the URL scheme, i.e., get itself registered as the handler for those URLs. The mobile OSs differ in how they protect against such squatting – for instance, Android prompts the user to choose between multiple apps claiming the same scheme; iOS does not.
  3. The current industry reality is that access tokens are predominantly ‘bearer’ tokens, i.e., any actor that can gain possession of an access token can use it on API calls with no additional criteria (such as signing some portion of the API call with a key associated with the token).

PKCE (Proof Key for Code Exchange by OAuth Public Clients) is an IETF draft specification designed to mitigate the above risk by preventing a malicious application that has obtained the code by scheme squatting from actually exchanging it for the more fundamental access token.

PKCE allows the native application to create an ephemeral one-time secret and use that to authenticate to the AS in Step 8 above. A malicious application, even if able to steal the code, will not have this secret and so will be unable to trade the stolen code for the access token.

If using PKCE, the overall flow is identical to the above, but with additional parameters added to certain messages. When the native application first loads the AS page in the browser (Step 3 above), it generates a code_verifier string, transforms it into a code_challenge (e.g. via a hash), and passes the challenge as a parameter on the URL. The AS stores away this value before returning the code back to the native application. When the native application then exchanges the code for the access token (Step 8 above), it includes the original code_verifier string on that call. If the code_verifier is missing or doesn’t match the value previously recorded, the AS will not return the access token.
Even if a malicious application is able to obtain a code, without the corresponding code_verifier it will be unable to turn that code into an access token, and so unable to access the business or personal data accessed through the APIs.
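
A sketch of the two PKCE-specific pieces (the parameter names come from the draft; the URLs and client details are illustrative):

const crypto = require('crypto');

const b64url = (buf) => buf.toString('base64')
  .replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');

// Step 3: mint a one-time code_verifier and derive its S256 code_challenge.
const codeVerifier = b64url(crypto.randomBytes(32));
const codeChallenge = b64url(crypto.createHash('sha256').update(codeVerifier).digest());

// The challenge travels on the authorization URL the system browser loads:
const authorizeUrl = 'https://as.example.com/authorize'
  + '?response_type=code&client_id=example-mobile-app'
  + '&code_challenge=' + codeChallenge
  + '&code_challenge_method=S256';

// Step 8: the token request carries the original code_verifier; the AS
// recomputes the challenge and refuses the exchange on any mismatch, so a
// code stolen by a scheme-squatting app is useless on its own.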

PKCE promises to provide an important security enhancement for the application of OAuth 2.0 to native applications by mitigating the risk of authorization codes being stolen by malicious applications installed on the device. In fact, the PKCE ‘trick’, that of using transient client secrets in order to authenticate to an AS when the client has no long-term secret, is being used in other applications, e.g. the Native Applications (NAPPS) WG underway in the OpenID Foundation.

May 25, 2015

Radovan Semančík - nLightPax [Technorati links]

May 25, 2015 02:50 PM

My recent posts about ForgeRock attracted a lot of attention. The reactions filled the spectrum almost completely: I've seen agreement and disagreement, peaceful and heated reactions. Some people expressed thanks; others were obviously quite upset. Some people seem to have taken it as an attack on ForgeRock. That was not my goal. I didn't want to harm ForgeRock or anyone else personally. All I wanted was to express my opinion about software that I'm using and to write down the story of our beginnings. But looking back, I can understand that this kind of expression might be too radical. I hadn't thought about that. I'm an engineer, not a politician. Therefore I would like to apologize to all the people that I might have hurt. It was not intentional. I didn't want to declare a war or anything like that. If you have understood it like that, please take this note as an offer of peace.

A friend of mine gave me very wise advice recently. What has happened is history. What was done cannot be undone. So, let it be. And let's look to the future. After all, had it not been for all that history with Sun, Oracle and ForgeRock, we probably would not have had the courage to start midPoint as an independent project. Therefore I think I should be thankful for it. Don't look back; look ahead. And it looks like there are great things silently brewing under the lid ...

(Reposted from https://www.evolveum.com/pax/)

May 22, 2015

Mark Dixon - OracleBots Generate a Majority of Internet Traffic [Technorati links]

May 22, 2015 06:16 PM

Bot1

According to the 2015 Bad Bot Landscape report, published by Distil Networks, only 40% of Internet traffic is generated by humans! Good bots (e.g. Googlebot and Bingbot for search engines) account for 36% of traffic, while bad bots account for 23%.

Bad bots continue to place a huge tax on IT security and web infrastructure teams across the globe. The variety, volume and sophistication of today’s bots wreak havoc across online operations big and small. They’re the key culprits behind web scraping, brute force attacks, competitive data mining, brownouts, account hijacking, unauthorized vulnerability scans, spam, man-in-the-middle attacks, and click fraud.

These are just averages. It’s much worse for some big players.

Bad bots made up 78% of Amazon’s 2014 traffic, not a huge difference from 2013. Verizon Business really cleaned up its act, cutting its bad bot traffic by 54% in 2014.

It was surprising to me that the US is the largest source for bad bot traffic.

The United States, with thousands of cheap hosts, dominates the rankings in bad bot origination. Taken in isolation, absolute bad bot volume data can be somewhat misleading. Measuring bad bots per online user yields a country’s “Bad Bot GDP.”

Using this latter “bad bots per online user” statistic, the nations of Singapore, Israel, Slovenia and Maldives are the biggest culprits.

The report contains more great information for those who are interested in bots. Enjoy!

Julian BondGin & Germain [Technorati links]

May 22, 2015 05:37 PM
Gin & Germain
Today's Friday Night Cocktail is the Gin & Germain. Except that it's not St Germain: I've been given a bottle of Chase Elderflower liqueur, a 20% Chase vodka and elderflower concoction that is basically the same thing. The gin is Adnams Copper House from our trip last week to the Suffolk coast. So without further ado,

50ml Adnams Copper House Gin
25ml Chase Elderflower
75ml Fevertree tonic
Over Ice, Collins glass, Lime garnish.

It's really just a slightly sweeter G&T with some extra flavours but just the thing for a muggy evening that's an hour away from a downpour.

http://barnotes.co/recipes/gin-st-germaine
http://adnams.co.uk/spirits/our-spirits/distilled-gin/
http://williamschase.co.uk/collections/all-products/products/chase-rhubarb-liqueur-20
[from: Google+ Posts]

MythicsPart 3 of 3 - Database 12c ADO:  The Feet Hit The Street [Technorati links]

May 22, 2015 05:19 PM

In parts 1 and 2 of this series, we discussed the history of database compression, and the new features in Oracle 12c called…

May 21, 2015

Mark Dixon - OracleBig Day for Lindbergh and Earhart! [Technorati links]

May 21, 2015 11:43 PM

Today is the anniversary of two great events in aviation history. On May 21, 1927, Charles Lindbergh landed in Paris, successfully completing the first solo, nonstop flight across the Atlantic Ocean. Five years later, on May 21, 1932, Amelia Earhart became the first pilot to repeat the feat, landing her plane in Ireland after flying across the North Atlantic.

Congratulations to these brave pioneers of the air!

LindbergEarhart

Both Lindbergh’s Spirit of St. Louis and Earhart’s Lockheed Vega airplanes are now housed in the Smithsonian Air and Space Museum in Washington, DC.

Spirit St Louis 590

Lockheed Vega 5b Smithsonian

 

KatasoftEasy Unified Identity [Technorati links]

May 21, 2015 07:00 PM

Stormpath + OAuth Opengraph

Unified Identity is the holy grail of website authentication. Allowing your users to log into your website through any mechanism they want, while always having the same account details, provides a really smooth and convenient user experience.

Unfortunately, unified identity can be tricky to implement properly! How many times have you logged into a website with Google Login, for instance, then come back to the site later and created an account with email / password only to discover you now have two separate accounts! This happens to me all the time and is really frustrating.

In a perfect world, a user should be able to log into your website with:

  • Email / password
  • Any social login provider (Google, Facebook, Twitter, etc.)

And always have the same account / account data — regardless of how they choose to log in at any particular point in time.

Unified Identity Management

Over the past few months we’ve been collaborating with our good friends over at OAuth.io to build a unified identity management system that combines OAuth.io’s broad support for social login providers with Stormpath’s powerful user management, authorization and data security service.

Here’s how it works:

You’ll use OAuth.io’s service to connect your Google, Facebook, Twitter, or any other social login services.

You’ll then use Stormpath to store your user accounts and link them together to give you a single, unified identity for every user.

It’s really simple to do and works very well!

And because OAuth.io supports over 100 separate OAuth providers, you can allow your website visitors to log in with just about any service imaginable!

User Registration & Unification Demo

To see how it works, I’ve created a small demo app you can check out here: https://unified-identity-demo.herokuapp.com/

Everything is working in plain old Javascript — account registration, unified identity linking, etc.

Go ahead and give it a try! It looks something like this:

Unified Identity Demo

If you’d like to dig into the code yourself, or play around with the demo app on your own, you can visit our project repository on Github here: https://github.com/stormpath/unified-identity-demo

If you’re a Heroku user, you can even deploy it directly to your own account by clicking the button below!

Deploy

Configure Stormpath + OAuth.io Integration

Let’s take a look at how simple it is to add unified identity to your own web apps now.

Firstly, you’ll need to go and create an OAuth.io account and a Stormpath account.

Don’t worry — both are totally free to use.

Next, you’ll need to create a Stormpath Application.

Stormpath Create Application

Next, you’ll need to log into your OAuth.io Dashboard, visit the “Users Overview” tab, and enter your Stormpath Application name and credentials.

OAuth.io Stormpath Configuration

Finally, you need to visit the “Integrated APIs” tab in your OAuth.io Dashboard and add in your Google app, Facebook app, and Twitter app credentials. This makes it possible for OAuth.io to easily handle social login for your web app:

OAuth.io Social Configuration

Show Me the Code!

Now that we’ve got the setup stuff all ready to go, let’s take a look at some code.

The first thing you’ll need to do is activate the OAuth.io Javascript library in your HTML pages:

<script src="https://stormpath.com/static/js/oauth.min.js"></script>
<script>
  OAuth.initialize("YOUR_OAUTHIO_PUBLIC_KEY");
</script>

You’ll most likely want to include this at the bottom of the <head> section in your HTML page(s). This will initialize the OAuth.io library.

Next, in order to register a new user via email / password you can use the following HTML / Javascript snippet:

<form onsubmit="return register()">
  <input id="firstName" placeholder="First Name" required>
  <input id="lastName" placeholder="Last Name" required>
  <input id="email" placeholder="Email" required type="email">
  <input id="password" placeholder="Password" required type="password">
  <button type="submit">Register</button>
</form>
<script>
  function register() {
    User.signup({
      firstname: document.getElementById('firstName').value,
      lastname: document.getElementById('lastName').value,
      email: document.getElementById('email').value,
      password: document.getElementById('password').value
    }).done(function(user) {
      // Redirect the user to the dashboard if the registration was
      // successful.
      window.location = '/dashboard';
    }).fail(function(err) {
      alert(err);
    });
    return false;
  }
</script>

This will create a new user account for you, with the user account stored in Stormpath.

Now, in order to log a user in via a social provider (Google, Facebook, Twitter, etc.) — you can do something like this:

<script>
  function link(provider) {
    var user = User.getIdentity();
    OAuth.popup(provider).then(function(p) {
      return user.addProvider(p);
    }).done(function() {
      // User identity has been linked!
    });
  }
</script>
<button type="button" onclick="link('facebook')">Link Facebook</button>
<button type="button" onclick="link('google')">Link Google</button>
<button type="button" onclick="link('twitter')">Link Twitter</button>

If a user clicks any of the three defined buttons above, they’ll be prompted to log into their social account and accept permissions — and once they’ve done this, they’ll then have their social account ‘linked’ to their normal user account that was previously created.

Once a user’s account has been ‘linked’, the user can log into any of their accounts and will always have the same account profile returned.

Nice, right?!

Simple Identity Management

Hopefully this quick guide has shown you how easy it can be to build unified identity into your next web application. With Stormpath and OAuth.io, it can take just minutes to get a robust, secure, user authentication system up and running.

To dive in, check out the demo project on github: https://github.com/stormpath/unified-identity-demo

Happy Hacking!

May 20, 2015

Nat SakimuraJWS and JWT are now RFCs! [Technorati links]

May 20, 2015 02:35 AM

ietf-logoIt took quite a long time[1], but JSON Web Signature (JWS) and JSON Web Token (JWT) have finally become Standards Track RFCs[2]: they are [RFC7515] and [RFC7519], respectively.

For those who don't know, JWS is a specification for digitally signing JSON; think of it as the JSON version of XML Signature. It comes in two serializations: the JSON Serialization and the Compact Serialization.

JWT takes a Compact Serialization JWS and introduces a number of useful parameter names so that it can convey login information and access-authorization information. It is intended mainly for RESTful systems, though of course it can be used elsewhere as well. Google and Microsoft have already implemented and deployed it at large scale, so you have probably been using it without even knowing. Deploying at scale before the RFC was even finished (in Google's case, shipping it in Android, where any change to the spec would have made updates painful) takes a courage I can only admire.
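
To give a feel for how lightweight this is in practice, a small sketch using the npm jsonwebtoken package (the secret, claims and issuer are illustrative):

const jwt = require('jsonwebtoken');

// Issue a compact-serialized token (a JWS under the hood) carrying login claims:
const token = jwt.sign(
  { sub: 'user-123', scope: 'read' },
  'shared-secret',
  { algorithm: 'HS256', expiresIn: '1h', issuer: 'https://as.example.com' }
);

// A RESTful service verifies the signature and claims on each request:
const claims = jwt.verify(token, 'shared-secret', { issuer: 'https://as.example.com' });
console.log(claims.sub); // 'user-123'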

So, now that these are officially RFCs, please go ahead and use them without hesitation.

[1] JSON Simple Sign dates from 2010, so it took five years... The JOSE WG was formed in the IETF in November 2011; it took a remarkably long time.
[2] There are three RFC tracks (Informational, Experimental, and Standards Track), and only Standards Track RFCs are considered "standards." Many frequently cited RFCs are in fact Informational, so check carefully.
[RFC7515] http://www.rfc-editor.org/info/rfc7515
[RFC7519] http://www.rfc-editor.org/info/rfc7519

Mike Jones - MicrosoftJWT and JOSE are now RFCs! [Technorati links]

May 20, 2015 12:54 AM

IETF logoThe JSON Web Token (JWT) and JSON Object Signing and Encryption (JOSE) specifications are now standards – IETF RFCs. They are:

This completes a 4.5 year journey to create a simple JSON-based security token format and underlying JSON-based cryptographic standards. The goal was always to “keep simple things simple” – making it easy to build and deploy implementations solving commonly-occurring problems using whatever modern development tools implementers chose. We took an engineering approach – including features we believed would be commonly used and intentionally leaving out more esoteric features, to keep the implementation footprint small. I’m happy to report that the working groups and the resulting standards stayed true to this vision, with the already widespread adoption and an industry award being testaments to this accomplishment.

The origin of these specifications was the realization in the fall of 2010 that a number of us had created similar JSON-based security token formats. Seemed like it was time for a standard! I did a survey of the choices made by the different specs and made a convergence proposal based on the survey. The result was draft-jones-json-web-token-00. Meanwhile, Eric Rescorla and Joe Hildebrand had independently created another JSON-based signature and encryption proposal. We joined forces at IETF 81, incorporating parts of both specs, with the result being the -00 versions of the JOSE working group specs.

Lots of people deserve thanks for their contributions. Nat Sakimura, John Bradley, Yaron Goland, Dirk Balfanz, John Panzer, Paul Tarjan, Luke Shepard, Eric Rescorla, and Joe Hildebrand created the precursors to these RFCs. (Many of them also stayed involved throughout the process.) Richard Barnes, Matt Miller, James Manger, and Jim Schaad all provided detailed input throughout the process that greatly improved the result. Brian Campbell, Axel Nennker, Emmanuel Raviart, Edmund Jay, and Vladimir Dzhuvinov all created early implementations and fed their experiences back into the spec designs. Sean Turner, Stephen Farrell, and Kathleen Moriarty all did detailed reviews that added ideas and improved the specs. Matt Miller also created the accompanying JOSE Cookbook – RFC 7520. Chuck Mortimore, Brian Campbell, and I created the related OAuth assertions specs, which are now also RFCs. Karen O’Donoghue stepped in at key points to keep us moving forward. Of course, many other JOSE and OAuth working group and IETF members also made important contributions. Finally, I want to thank Tony Nadalin and others at Microsoft for believing in the vision for these specs and consistently supporting my work on them.

I’ll close by remarking that I’ve been told that the sign of a successful technology is that it ends up being used in ways that the inventors never imagined. That’s certainly already true here. I can’t wait to see all the ways that people will continue to use JWTs and JOSE to build useful, secure applications!

May 19, 2015

Mike Jones - MicrosoftThe OAuth Assertions specs are now RFCs! [Technorati links]

May 19, 2015 11:56 PM

OAuth logoThe OAuth Assertions specifications are now standards – IETF RFCs. They are:

This completes the nearly 5 year journey to create standards for using security tokens as OAuth 2.0 authorization grants and for OAuth 2.0 client authentication. Like the JWT and JOSE specs that are now also RFCs, these specifications have been in widespread use for a number of years, enabling claims-based use of OAuth 2.0. My personal thanks to Brian Campbell and Chuck Mortimore for getting the ball rolling on this and seeing it through to completion, to Yaron Goland for helping us generalize what started as a SAML-only authorization-grant-only spec to a framework also supporting client authentication and JWTs, and to the OAuth working group members, chairs, area directors, and IETF members who contributed to these useful specifications.

Mark Dixon - OracleTuring Test (Reversed) [Technorati links]

May 19, 2015 10:13 PM

Turing1

The classic Turing Test, according to Wikipedia, is:

a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human. Alan Turing proposed that a human evaluator would judge natural language conversations between a human and a machine that is designed to generate human-like responses. …

The test was introduced by Turing in his 1950 paper “Computing Machinery and Intelligence.” …

As illustrated in the first diagram:

The “standard interpretation” of the Turing Test, in which player C, the interrogator, is given the task of trying to determine which player – A or B – is a computer and which is a human. The interrogator is limited to using the responses to written questions to make the determination. …

In the years since 1950, the test has proven to be both highly influential and widely criticised, and it is an essential concept in the philosophy of artificial intelligence.

Turing2

What if the roles were reversed, and a computer was tasked with determining which of the entities on the other side of the wall was a human and which was a computer?  Such is the challenge for software that needs to decide which requests made to an online commerce system are generated by humans typing on a browser, and which are illicit bots imitating humans.

By one year-old estimate, “more than 61 percent of all Web traffic is now generated by bots, a 21 percent increase over 2012.” Computers must automatically determine which requests come from people and which come from bots, as illustrated in the second diagram.

While this is not strictly a Turing test, it has some similar characteristics.  The computer below the line doesn’t know ahead of time what techniques the bots will use to imitate human interaction. These decisions need to be made in real time and be accurate enough to prevent illicit bots from penetrating the system. A number of companies offer products or services that accomplish this task.

One might ask, “Does this process of successfully choosing between human and bot constitute artificial intelligence?”

At the current state of the art, I think not, but it is an area where enhanced computer intelligence could provide real value.

Radiant LogicIn the Land of Customer Profiles, SQL and Data Integration Reign Supreme [Technorati links]

May 19, 2015 06:27 PM

Last week, we took a look at the challenges faced by “traditional IAM” vendors as they try to move into the customer identity space. Such vendors offer web access management and federation packages that are optimized for LDAP/AD and aimed at employees. Now we should contrast that with the new players in this realm and explore how they’re shaping the debate—and growing the market.

Beyond Security with the New IAM Contenders: Leveraging Registration to Build a More Complete Customer Profile

So let’s review the value proposition of the two companies that have brought us this new focus on customer identity: Gigya and Janrain. For these newcomers, the value is not only about delivering security for access or a better user experience through registration. They’re also aimed at leveraging that registration process to collect data for a complete customer profile, moving from a narrow security focus to a broader marketing/sales focus—and this has some consequences for the identity infrastructure and services needed to support these kind of operations.

For these new contenders, security is a starting point to serve better customer knowledge, more complete profiles, and the entire marketing and sales lifecycle. So in their case it is not only about accessing or recording customer identities, it’s about integrating and interfacing this information into the rest of the marketing value chain, using applications such as Marketo and others to build a complete profile. So one of the key values here is about collecting and integrating customer identity data with the rest of the marketing/sales activities.

At the low level of storage and data integration, that means the best platform for accomplishing this would be SQL—or better yet, a higher-level “join” service that’s abstracted or virtual, as in the diagram below. It makes sense that you’d need some sort of glue engine to join identities with the multiple attributes that are siloed across the different processes of your organization. And we know that LDAP directories alone, without some sort of integration mechanism, are not equipped for that. In fact, Gigya, the more “pure play” in this space, doesn’t even use LDAP directories; instead, they store everything in a relational database because SQL is the engine for joining.

Virtualization Layer
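
As a toy illustration of that join idea (the silos, keys and attributes below are invented), a virtual layer resolves one identity and pulls attributes from each silo on demand:

// Two identity silos, modeled as in-memory maps.
const directory = new Map([ // LDAP-ish silo: authentication profile
  ['jdoe', { cn: 'Jane Doe', mail: 'jdoe@example.com' }]
]);
const crm = new Map([ // SQL-ish silo: marketing/sales attributes
  ['jdoe@example.com', { loyaltyTier: 'gold', lastCampaign: 'spring-2015' }]
]);

// The virtual "join" service: one global profile per identity.
function globalProfile(uid) {
  const entry = directory.get(uid);
  if (!entry) return null;
  return Object.assign({}, entry, crm.get(entry.mail) || {});
}

console.log(globalProfile('jdoe')); // { cn, mail, loyaltyTier, lastCampaign }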

So if we look at the customer identity market through this lens of SQL and the join operation, I see a couple of hard truths for the traditional IAM folks:

  1. First, if we’re talking about using current IAM packages in the security field for managing customer access, performance and scalability are an issue due to the “impedance” problem. Sure, your IAM package “supports” SQL but it’s optimized for LDAP, so unless you migrate—or virtualize—your customers’ identity from SQL to LDAP in the large volumes that are characteristic of this market, you’ll have problems with the scalability and stability of your solution. (And this does not begin to cover the need for flexibility or ease of integration with your existing applications and processes dealing with customers).
  2. And second, if you are looking at leveraging the customer registration process as a first step to build a complete profile, your challenge is more in data/service integration than anything else. In that case, I don’t see where there’s a play for “traditional WAM” or “federation” vendors that stick to an LDAP model, because no one except those equipped with an “unbound” imagination would use LDAP as an engine for integration and joining… :)

The Nature of Nurturing: An Object Lesson in Progressive, Contextual Disclosure

Before we give up all hope on directories (or at least on hierarchies, graphs, and LDAP), let’s step beyond the security world for a second and look at the marketing process of nurturing prospect and customer relationships. Within this discipline, a company deals with prospects and customers in a progressive way, guiding them through each stage of the process in a series of steps and disclosing the right amount of information within the right context. And of course, it’s natural that such a process could begin with the registration of a user.

We’ll step through this process in my next post, so be sure to check back for more on this topic…


The post In the Land of Customer Profiles, SQL and Data Integration Reign Supreme appeared first on Radiant Logic, Inc

Matthew Gertner - AllPeersFive Young British Athletes To Watch Out For At The 2016 Rio Olympics [Technorati links]

May 19, 2015 04:24 AM

At the upcoming Summer Olympics in Rio, there will be many young British athletes to watch ... photo by CC user 39405339@N00 on Flickr

The 2014 Nanjing Youth Olympics in China were seen as a huge success for Team GB with a haul of 24 medals, but which British youngsters will we be seeing with Olympic honours in Rio 2016?

JESSICA FULLALOVE, 18 (SWIMMING)

Jessica Fullalove has her standards set high. She believes that she can be the female equivalent to Michael Phelps and the “Usain Bolt of the pool”. With three silver medals at the Youth Olympics in Nanjing and a senior Commonwealth appearance under her belt, no one is ruling out Jessica’s potential.

The 18-year-old from Oldham is already one of the UK’s most exciting new sporting stars and there’s no doubt that, by the time she hits the water in Rio, Britain will be full of love for Jessica Fullalove.

SALLY BROWN, 19 (TRACK)

Sally Brown’s progress on the world stage has been hampered by injuries for the last few years but her abilities are highly rated by everyone within the sporting community. Having successfully recovered from the same injury herself, Jessica Ennis-Hill has offered her personal assurance that Sally can make a full recovery from the stress fracture to her right foot that has kept her out of so many competitions in the last year.

Sally, who receives financial help and sports insurance from Sportsaid and Bluefin Sport, has also had to work part time at Sainsbury’s while recovering from her injury, as well as tackle her A-Levels at the same time. Despite all this she is predicted to return to the track physically and mentally stronger than ever, and at her best she has a real chance of being among the medals at the Rio 2016 Paralympics.

MORGAN LAKE, 17 (TRACK)

With Olympic champion Jessica Ennis-Hill and British high jump and indoor long jump record holder Katarina Johnson-Thompson both already tipped to be competing for heptathlon medals in Rio it’s unlikely that there is room for another British heptathlon hopeful, but 17-year-old Morgan Lake may have something to say about that. Morgan is not only the double junior heptathlon and high jump world champion but she is also performing and scoring significantly better than Jessica Ennis-Hill and Katarina Johnson-Thompson were at her age.

Last year she broke the world indoor pentathlon record with 4,284 points and the junior high jump record with a jump of 1.93m, so don’t be surprised to find Morgan breaking more records at a senior level in Rio.

CLAUDIA FRAGAPANE, 17 (GYMNASTICS)

Claudia Fragapane is only 4ft 5″ tall but her achievements are already looming large in the gymnastics world. An astonishing four gold medals at the 2014 Commonwealth Games (she was the first British woman to achieve this feat in 84 years) saw her go on to win BBC Young Sports Personality of the Year, and she is hoping to reproduce that success in Rio.

While 17 may not be that young in gymnastics, Claudia Fragapane is a British athlete who has to be mentioned on this list, as she has the potential not only to clinch some medals in Rio but also to evolve into one of Britain’s greatest ever gymnasts.

CHRIS MEARS, 22 (DIVING)

Never write off Chris Mears. This is a man who has made a habit of accomplishing seemingly impossible tasks. After unknowingly suffering from enlarged glands caused by glandular fever, Mears ruptured his spleen while competing at the 2009 Australian Youth Olympic Festival in Sydney.

He lost five pints of blood, and with his blood platelet count at five, down from around 400, his parents were informed that his chances of survival were incredibly slim. Mears did survive and seemed to be making an extraordinary recovery when an unexpected seizure put him into a coma.

Doctors informed his family that he was likely to have suffered irreparable brain damage from the severity of the seizure but again Mears defied science and made a full recovery. He was told that he would never dive again but by 2012 he was competing in the London Olympics and by 2014 had won a gold medal at the Commonwealth Games with his partner Jack Laugher. Chris is now waiting for someone to tell him that he won’t be able to win a medal in Rio.

The post Five Young British Athletes To Watch Out For At The 2016 Rio Olympics appeared first on All Peers.

Matthew Gertner - AllPeersGreat Ways to Save on Your Satellite Television Service [Technorati links]

May 19, 2015 04:24 AM

How can you save on your satellite television service? ... photo by CC user Loadmaster  on wikimedia

I love watching television as much as the next person. Being able to catch up on some of my favorite television shows throughout the week, and even watching a flick or two with the family on the weekends can be a great pastime.

However, if you’re like me, you too have a busy schedule that doesn’t allow you to watch television hour after hour. I probably watch a total of ten hours of television (on a lucky week), yet the subscription services can often be kind of costly. Rather than ditch the television service altogether, I decided to switch from cable to satellite services while still looking for huge savings.

Here are a few options you might try to save on your satellite television service at home:

Bundling Services

One option that satellite television service providers have is bundling. This essentially means that you’re able to package your television, internet, and phone services into one for a discounted price. By choosing a bundle package, consumers can save as much as 10-20% on their monthly bill.

However, since satellite television service providers don’t have their own internet and phone services, you will have to receive the bundled discounts through their partnering service providers. There are several for you to choose from so that you can get the landline features and internet speed you need.

Compare Packages

Another way to get your satellite television bill down is to compare the various packages that are offered. Not only should you compare packages between various satellite subscription providers, but you should also compare packages within each company.

Each service provider has several packages (generally a basic, premium, and platinum package). Review each of the packages available to see which one will give you the best options for landline features, internet speeds and channel line ups. Visit tvlocal.com to go over the various packages on offer and choose which one will work best for your entertainment and communication needs.

Ask About Specials

Companies are looking to gain new customers on the regular basis. Therefore, if you really want to get a good deal, contact the company directly to find out what types of specials they might be able to offer you. Sometimes, you’ll find that a customer service representative is willing to offer you more discounts simply to get you signed up as a customer.

Upgraded Technology

Generally with a cable subscription service, you’ll need to have a cable box for every television set you have available. This can cost you an additional rental fee for each box you have. However, satellite service providers have new technology that will allow you to purchase wireless boxes that can connect and be used for multiple television sets. Several satellite television providers also give a huge savings for new customers looking to purchase the latest technology.

If you’re looking for convenient yet affordable ways to save money on your monthly television subscription services, these ideas will certainly help you save a bundle. When choosing the best television service provider, be sure to also compare things such as overall value, channel lineup, and features to get the best bang for your buck.

Now you can keep up with all the latest television shows and movies without having to break the bank. Here’s to binge watching and comfort foods. Enjoy.

The post Great Ways to Save on Your Satellite Television Service appeared first on All Peers.

May 18, 2015

KatasoftREST VS SOAP: When Is REST Better? [Technorati links]

May 18, 2015 11:49 PM

While SOAP (Simple Object Access Protocol) has long been the dominant approach to web service interfaces, REST (Representational State Transfer) is quickly winning out and now represents over 70% of public APIs.

REST is simpler to interact with, particularly for public APIs, but SOAP is still used and loved for specific use cases. REST and SOAP have important, frequently overlooked differences, so when building a new web service, do you know which approach is right for your use case?

Spoiler Alert: USE REST+JSON. Here’s Why…

SOAP: The Granddaddy of Web Services Interfaces

SOAP is a mature protocol with a complete spec, and is designed to expose individual operations – or pieces of operations – as web services. One of the most important characteristics of SOAP is that it uses XML, rather than HTTP conventions, to define the content of the message.

The Argument For SOAP

SOAP is still offered by some very prominent tech companies for their APIs (Salesforce, Paypal, Docusign). One of the main reasons: legacy system support. If you built a connector between your application and Salesforce back in the day, there’s a decent probability that connection was built in SOAP.

There are a few additional situations where SOAP still makes sense. Some would argue that, given its support for standards such as WS-AtomicTransaction and WS-Security, SOAP can benefit developers when there is a high need for transactional reliability.

REST: The Easy Way to Expose Web Services

And yet, most new APIs are built in REST+JSON. Why?

First, REST is easy to understand: it uses HTTP and basic CRUD operations, so it is simple to write and document. That simplicity also makes it easy for other developers to understand and write services against.

REST also makes efficient use of bandwidth, as it’s much less verbose than SOAP. Unlike SOAP, REST is designed to be stateless and REST reads can be cached for better performance and scalability.

REST supports many data formats, but the predominant use of JSON means better support for browser clients. JSON provides a standardized format for API payloads and maps naturally onto JavaScript in the browser. Read our best practices on REST+JSON API Design Here.
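
To make that concrete, here is a minimal sketch of a REST read from Java using nothing but the standard library – the endpoint URL is hypothetical:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class RestReadExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint; a REST read is just an HTTP GET on a resource URL.
        URL url = new URL("https://api.example.com/v1/accounts/42");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        conn.setRequestProperty("Accept", "application/json");

        // The response body is plain JSON -- no XML envelope to unwrap.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}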

Case 1: Developing a Public API

REST focuses on resource-based (or data-based) operations, and inherits its operations (GET, PUT, POST, DELETE) from HTTP. This makes it easy for both developers and web browsers to consume, which is beneficial for public APIs where you don’t have control over what’s going on with the consumer. Simplicity is one of the strongest reasons that major companies like Amazon and Google are moving their APIs from SOAP to REST.

Case 2: Extensive Back-and-Forth Object Information

APIs used by apps that require a lot of back-and-forth messaging should always use REST; mobile applications are the classic example. If a user attempts to upload something to a mobile app (say, an image to Instagram) and loses reception, REST allows the process to be retried without major interruption once the user regains cell service.

However, with SOAP, the same type of service would require more initialization and state code. Because REST is stateless, the client context is not stored on the server between requests, giving REST services the ability to be retried independently of one another.

Case 3: Your API Requires Quick Developer Response

REST allows easy, quick calls to a URL for fast return responses. The difference between SOAP and REST in this case is complexity – SOAP services require maintaining an open stateful connection with a complex client, whereas REST enables requests that are completely independent of each other. The result is that testing with REST is much simpler.

Helpfully, REST services are now well-supported by tooling. The available tools and browser extensions make testing REST services continually easier and faster.

Developer Resources for REST+JSON API Development

Stormpath is a REST+JSON API-based authentication and user management system for your web and mobile services and APIs. We <3 REST+JSON.

If you want to learn more about how to build, design, and secure REST+JSON APIs, here are some developer tutorials and explainer blog posts on REST+JSON API development:

Mark Dixon - OracleSecurity: Complexity and Simplicity [Technorati links]

May 18, 2015 11:48 PM


It is quite well documented that Bruce Schneier stated that “Complexity is the worst enemy of security.”

As a consumer, I think this complexity is great. There are more choices, more options, more things I can do. As a security professional, I think it’s terrifying. Complexity is the worst enemy of security.  (Crypto-Gram newsletter, March 15, 2000)

Leonardo da Vinci is widely credited with the statement, “Simplicity is the ultimate sophistication,” although there is some doubt whether he actually said those words.

Both statements have strong implications for information security today.

In the March 2000 newsletter, Bruce Schneier suggested five reasons why security challenges rise as complexity increases:

  1. Security bugs.  All software has bugs. As complexity rises, the number of bugs goes up.
  2. Modularity of complex systems.  Complex systems are necessarily modular; security often fails where modules interact.
  3. Increased testing requirements. The number of errors and the difficulty of evaluation grow rapidly as complexity increases.
  4. Complex systems are difficult to understand. Understanding becomes more difficult as the number of components and system options increase.
  5. Security analysis is more difficult. Everything is more complicated – the specification, the design, the implementation, the use, etc.

In his February 2015 article, “Is Complexity the Downfall of IT Security?”, Jeff Clarke suggested some other reasons:

  1. More people involved. As a security solution becomes more complex, you’ll need more people to implement and maintain it. 
  2. More countermeasures. Firewalls, intrusion-detection systems, malware detectors and on and on. How do all these elements work together to protect a network without impairing its performance? 
  3. More attacks. Even if you secure your system against every known avenue of attack, tomorrow some enterprising hacker will find a new exploit. 
  4. More automation. Removing people from the loop can solve some problems, but like a redundancy-management system in the context of reliability, doing so adds another layer of complexity.

And, of course, we need to consider the enormous scale of this complexity. Cisco has predicted that 50 billion devices will be connected to the Internet by 2020. Every interconnection in that huge web of devices represents an attack surface.

How in the world can we cope? Perhaps we need to apply Leonardo’s simplicity principle.

I think Bruce Schneier’s advice provides a framework for simplification:

  1. Resilience. If nonlinear, tightly coupled complex systems are more dangerous and insecure, then the solution is to move toward more linear and loosely coupled systems. This might mean simplifying procedures or reducing dependencies or adding ways for a subsystem to fail gracefully without taking the rest of the system down with it.  A good example of a loosely coupled system is the air traffic control system. It’s very complex, but individual failures don’t cause catastrophic failures elsewhere. Even when a malicious insider deliberately took out an air traffic control tower in Chicago, all the planes landed safely. Yes, there were traffic disruptions, but they were isolated in both time and space.
  2. Prevention, Detection and Response. Security is a combination of prevention, detection, and response. All three are required, and none of them are perfect. As long as we recognize that — and build our systems with that in mind — we’ll be OK. This is no different from security in any other realm. A motivated, funded, and skilled burglar will always be able to get into your house. A motivated, funded, and skilled murderer will always be able to kill you. These are realities that we’ve lived with for thousands of years, and they’re not going to change soon. What is changing in IT security is response. We’re all going to have to get better about IT incident response because there will always be successful intrusions.

But a final thought from Bruce is very appropriate. “In security, the devil is in the details, and those details matter a lot.”

May 15, 2015

Mark Dixon - OracleJust Another Day at the Office [Technorati links]

May 15, 2015 09:35 PM

Today’s featured photo from NASA shows the Space Station’s crew on an ordinary day of work.


The six-member Expedition 43 crew worked a variety of onboard maintenance tasks, ensuring crew safety and the upkeep of the International Space Station’s hardware. In this image, NASA astronauts Scott Kelly (left) and Terry Virts (right) work on a Carbon Dioxide Removal Assembly (CDRA) inside the station’s Japanese Experiment Module.

For just a day or two, it would be so fun to work in weightless conditions.  Not too probable at this stage of my life, however!

 

GluuOAuth 2.0 as the Solution for Three IoT Security Challenges [Technorati links]

May 15, 2015 05:08 PM

Note: This article was originally published as a guest blog for Alien Vault.

Ideas on managing IoT in your house

While participating on the Open Interconnect Consortium Security Task Group, I offered to describe a use case for Internet of Things (IoT) security that would illustrate how OAuth2 could provide the secret sauce to make three things possible that were missing from their current design: (1) leveraging third-party digital credentials; (2) centrally managing access to IoT resources in a vendor-neutral way; and (3) machine-to-machine discovery and authentication.

IoT physical door locks provide a concrete use case that has intrigued me for a long time–what could be more fundamental to access management than controlling who can enter your house? Wouldn’t it be great if a person could use their state-issued driver’s license to unlock your front door? Two standard profiles of OAuth2 can make this possible: OpenID Connect (to identify you using your driver’s license), and the User Managed Access protocol (UMA), to centralize policy management.

Trusted Credentials & Standard APIs

The idea of a state-issued digital credential is not that crazy. Many countries have digital identifiers. In Switzerland, you can obtain a government-issued digital ID in the form of a USB stick called SwissID. But your mobile phone has the potential to be a more convenient credential than a USB stick. And this is exactly the goal of the state-issued mobile driver’s license concepts proposed by Delaware and Iowa.

But what APIs will your state publish to enable authorized web, mobile, or IoT clients to use this new mobile credential? The most likely candidate is the above-mentioned OAuth2 profile for authentication: OpenID Connect. Developers are already familiar with OpenID Connect if they’ve ever used the Google authentication APIs.

So, in our hypothetical scenario, we now have our third-party digital credential–a state mobile driver’s license–and we have OpenID Connect APIs, published by the state, with which to identify the person who was issued the license. The next component of our system is a central security management user interface that enables the homeowner to manage who has the capability to access their home. Conveniently, this same Console can be used to control other IoT devices that have APIs.

Central Permission Management

The reason we need a central management user interface is simple–if every IoT device in your home has its own security management web interface, it won’t scale, and there are all sorts of new decisions consumers will have to make.

Using a central policy decision point, people can manage in one place which policies apply to what, without having to go to the web admin page of every device. For short, let’s call this thing the “Console.”

So let’s walk through in a little more detail how this use case would work:

  1. The homeowner would configure their Console to rely on the OpenID Provider (OP) of certain domains. For this example, let’s say there are two domains: 1) mystate.gov and 2) the local domain for your house. You might want a local domain to manage accounts for people who don’t have a driver’s license, like your young kids. For people in your local domain, you’ll also have to manage their credentials, i.e. passwords. This might be a pain in the neck, but at least you don’t have to manage users for every IOT device in your house.
  2. Using OpenID Connect Discovery, the Console could immediately find out the local and state OpenID Connect API URLs, and other information required to securely identify a person at the external domain. The OpenID Connect Discovery spec is very simple. Just make an HTTPS GET request to https://{domain}/.well-known/openid-configuration. This will return a JSON object with the URLs for the APIs of the OP, and other information your Console will need, like what kind of crypto is supported and what kind of authentication is available (see the sketch after this list). If you want to see an example of an OpenID Connect Discovery response, check out Gluu’s OpenID Connect discovery page.
  3. Next your Console would dynamically register itself as a client with the state OpenID Connect Provider (OP) using the OpenID Connect dynamic client registration API. Once completed, the Console will be able to authenticate a person in the respective domain.
  4. The person using the console could then define a policy that describes how a person entering the house should be authorized–for example, using what credentials and during what time of day.
  5. The door lock would use OpenID Connect Discovery and Dynamic Client Registration to register itself with the Console.
  6. The door lock would rely on the console to handle the person’s authentication. The console would call the OpenID Connect authentication APIs at the state, which could result in the state sending a PUSH notification to the person’s pre-registered mobile device. The person might see an alert that says something like “10 Oak Drive Security System wants to verify your first and last name. Is it ok to release this information?” Once approved, the policy decision point can use that information for policy evaluation. For example, perhaps I made a policy that said to let John Smith enter my house from 10am – 2pm. A PUSH notification could be combined with biometric, cognitive, or other physical tokens to make the identification multi-factor. The policy in the Console could even require one of these mechanisms by specifying a specific ‘acr’, that would be provided as part of the response by the state OpenID Connect provider.
  7. The Console has a few ways it could handle enrollment–which users are allowed to enter the house. Access requests could be queued for approval by the homeowner, or perhaps the homeowner registers the person in advance.
  8. What would the interface look like for the door lock? How would the person assert who they are, or enter their username and password for locally managed credentials? Here are a few ideas: the person could enter a short URL on the mobile browser of their phone; the person could read the URL via NFC; a smart service provider could provide an app that uses the geolocation to find the nearest lock.
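
As a concrete illustration of step 2, here is a minimal sketch of fetching and reading a discovery document from Java. The issuer domain is hypothetical, and the JSON parsing assumes the org.json library on the classpath:

import java.io.InputStream;
import java.net.URL;
import java.util.Scanner;
import org.json.JSONObject;

public class OidcDiscoveryExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical state OpenID Provider; substitute the real issuer domain.
        URL url = new URL("https://login.mystate.gov/.well-known/openid-configuration");
        try (InputStream in = url.openStream();
             Scanner scanner = new Scanner(in, "UTF-8").useDelimiter("\\A")) {
            JSONObject config = new JSONObject(scanner.next());
            // The discovery document lists the OP's endpoints and capabilities.
            System.out.println("authorize: " + config.getString("authorization_endpoint"));
            System.out.println("token:     " + config.getString("token_endpoint"));
            System.out.println("register:  " + config.optString("registration_endpoint"));
        }
    }
}

With those endpoints in hand, the Console can register itself dynamically (step 3) and start sending users through the authentication flow.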

Of course the door lock might have some fallback mechanisms. Perhaps if Wifi is down, the door might fallback to some kind of proprietary Bluetooth mechanism. Or have a physical key, like a USB key that opens the lock. Or even a high-tech piece of metal, cut in a special unique way that fits into a mechanical unlocking apparatus! Now that would be cool! Oh wait, that’s the key we have today!

May 14, 2015

GluuPart III: Gluu proposes free API Security Certification for Open Source community [Technorati links]

May 14, 2015 08:20 PM

traditional-origami-pigeon

Note: This is Part III of a three part series. Part I and II are published here and here, respectively.

It’s so easy to acquiesce to a technology world decided for us by technology giants. However, when it comes to security, if we acquiesce to standards that are too low, it could put the brakes on our ability to benefit from new services.

Consumers, businesses, government and education–every segment of society is being affected by a fundamental shift to digital transactions. The backbone of these digital transactions is APIs. API security certification may seem like a tiny consideration, but the stakes are high.

Gluu very much wants to see a level playing field for API security certification–especially for free open source software (FOSS). We want to see a world where FOSS is the default choice for domain access management. For this reason, we are considering whether Gluu should contribute a portion of its revenues to fund an independent API Security Certification organization, which would enable websites, vendors, and organizations to self-certify for free.

Gluu would not control the organization; in fact, there should be a firewall between funding and management. The organization should form policies that exhibit fairness, empathy, and genuine concern for the interest of the community. The goal of certification would be to provide tools and services to engineers to help them get security right, and to provide the public with up-to-date information about how software conforms to the defined standards.

The goal is not to provide a legal mechanism that shifts liability. Gluu believes this important function can be handled between the parties using the technology. More efficient hub-and-spoke legal trust models, such as InCommon in the education sector, can enable a scalable way for people and organizations to manage their relationships with other domains.

There are new standards for API security that are available today and in development. The Internet needs an innovative, comprehensive and democratic certification program for API security. In this instance, we should simply not acquiesce.

Note: This is Part III of a three part series. Part I and II are published here and here, respectively.
 

KatasoftFive Practical Tips for Building Your Java API [Technorati links]

May 14, 2015 07:00 PM

Increasingly, Java developers are building APIs for their own apps to consume as part of a micro-services oriented architecture, or for consumption by external services. At Stormpath we do both, and we’re experts in the “complications” this can create for a development team. Many teams find it difficult to manage authentication and access control to their APIs, so we want to share a few architectural principles and tips to make it easier to manage access to your Java API.

For a bit of context: Stormpath, at its core, is a Java-based REST+JSON API, built on the Spring Framework using Apache Shiro as an application security layer. We store user credentials and data on behalf of other companies, so for us security is paramount. Thus, my first requirement for these tips is that they help you manage access to your Java API securely.

We also evaluated tips based on whether they work well in a services-based architecture like ours, whether they benefit both internal and public APIs, and whether they offer developers increased speed and security.

On to the fun part!

Secure Authentication Requests with TLS

I like to think of an API as a super-highway of access to your application. Allowing basic authentication requests without TLS support is like allowing people to barrel down your highway… drunk… in a hydrogen-powered tank… When that request lands, it has the potential to wreak havoc in your application.

We have written extensively on how to secure your API and why you shouldn’t use password-based authentication in an API. When in doubt, at a bare minimum, use Basic Authentication with TLS.

How you implement TLS/SSL is heavily dependent on your environment – both Spring Security and Apache Shiro support it readily. But I think devs frequently omit this step because it seems like a lot of work. Here is a code snippet that shows how a servlet retrieves certificate information:

import java.security.cert.X509Certificate;

public void doGet(HttpServletRequest req, HttpServletResponse res) throws ServletException, IOException {
    // ...
    // The servlet container exposes the client's certificate chain (if any) as a request attribute.
    X509Certificate[] certs = (X509Certificate[]) req.getAttribute("javax.servlet.request.X509Certificate");
    // ...
}

Not a lot of work. I’ve posted some links to different tutorials at the bottom if you’re new to TLS or want to ensure you’re doing it right. Pro tip: Basic Auth with TLS is built into Stormpath SDKs.

Build Your Java Web Service with Spring Boot

Spring Boot is a fantastic way to get a Java API into production without a lot of setup. As one blogger wrote, “It frees us from the slavery of complex configuration files, and helps us to create standalone Spring applications that don’t need an external servlet container.” Love. It.
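
To show how little setup that means in practice, here is a minimal sketch of a complete standalone service – it assumes only the spring-boot-starter-web dependency, and the endpoint path is made up:

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// A standalone REST service: run main() and Spring Boot starts an
// embedded servlet container, with no XML or external setup required.
@SpringBootApplication
@RestController
public class ApiApplication {

    // Hypothetical endpoint; the return value is written straight to the response body.
    @RequestMapping("/status")
    public String status() {
        return "{\"status\":\"ok\"}";
    }

    public static void main(String[] args) {
        SpringApplication.run(ApiApplication.class, args);
    }
}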

There are a ton of tutorials (see below) on building Restful Web services with Spring Boot, and the great thing about taking this approach is that much of the security is pre-built, either through a sample application or a plugin. SSL in Spring Boot, for instance, is configured by adding a few lines to your application.properties file:

server.port = 8443
server.ssl.key-store = classpath:keystore.jks
server.ssl.key-store-password = secret
server.ssl.key-password = another-secret

Stormpath also offers Spring Boot Support, so you can easily use Stormpath’s awesome API authentication and security features in your Spring Boot App.

Use a Java API to Visualize Data…About your Java API

So meta! APIs are (potentially, hopefully!) managing lots of data, and the availability of that data is critical to downstream connections, so for APIs in production, analytics and monitoring are critical. Most users won’t warn you before they start load testing, and good usage insight helps you plan your infrastructure as your service grows.

Like other tech companies (Coursera, Indeed, BazaarVoice), we use DataDog to visualize our metrics and events. They have strong community support for Java APIs, plus spiffy dashboards.

Also, we project our dashboards on a wall in the office. This certainly doesn’t replace pager alerts on your service, but it helps make performance transparent to the whole team. At Stormpath, this has been a great way to increase and inform discussion about our service delivery and infrastructure – across the team.

Encourage Your Users To Secure Their API Keys Properly

The most terrifying thing that happens to me at work is when someone emails me their Stormpath API key. It happens with shocking frequency, so we went on a fact-finding mission and discovered a scary truth: even awesome devs use their email to store their API key and/or password… and occasionally accidentally hit send. When these land in my inbox, a little part of me dies.

Some ideas:

– Make your API key button red. I’m not above scaring people with a red button.
– At Stormpath we encourage storing the API key/secret in a file only readable by the owner. You can instruct your users to do this via the terminal, regardless of what language they are working in. Our instructions look like this (for an API key file named apiKey.properties):

Save this file in a secure location, such as your home directory, in a hidden .stormpath directory. For example:

$ mkdir ~/.stormpath

$ mv ~/Downloads/apiKey.properties ~/.stormpath/

Change the file permissions to ensure only you can read this file. For example:

$ chmod go-rwx ~/.stormpath/apiKey.properties
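
On the JVM, loading that file at runtime is then trivial with java.util.Properties – a minimal sketch, with illustrative property names (apiKey.id here; check your own file’s keys):

import java.io.FileInputStream;
import java.util.Properties;

public class ApiKeyLoader {
    public static void main(String[] args) throws Exception {
        // Read the key from the protected file instead of hard-coding it in source.
        String path = System.getProperty("user.home") + "/.stormpath/apiKey.properties";
        Properties props = new Properties();
        try (FileInputStream in = new FileInputStream(path)) {
            props.load(in);
        }
        // Property names are illustrative; use whatever keys your file defines.
        String keyId = props.getProperty("apiKey.id");
        System.out.println("Loaded API key id: " + keyId);
        // Never print or log the corresponding secret.
    }
}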

Stormpath Makes API Authentication, Tokens and Scopes Easy

Finally, a shameless plug: Stormpath automates a lot of functionality for Java WebApps. Our API security features generate your API keys, manage authentication to your API, allow you to control what users with keys have access to via groups and custom permissions, and manage scopes and tokens.

We lock down access with awesome Java security infrastructure; check out Stormpath for API Security and Authentication and our API Authentication Guide. These features work with all our Java SDKs for Servlets, Spring Boot, Apache Shiro and Spring Security and will save you tons of time.

As ever, feel free to comment if we missed anything or you have additional suggestions. If you want help with your Stormpath setup, email support@stormpath.com and a technical human will get back to you quickly.

Resources

TLS Tutorials & Sample Apps

Spring Boot your REST API service

Ben Laurie - Apache / The BunkerDuck with Orange and Fennel [Technorati links]

May 14, 2015 04:28 PM

Duck breasts
Honey
Soy sauce
Fennel bulbs
Orange

Sous vide the duck breasts with a bit of honey and salt at around 56C for about an hour to 90 minutes (sorry, but sous vide really is the path to tender duck breasts – if you can’t sous vide, then cook them however you like, to rare or medium rare). Let them cool down a little, then fry for a few minutes on each side to brown (if you’ve done the sous vide thing).

Let them rest for 5-10 minutes, slice into 1/4″ slices.

Thinly slice the fennel.

Peel the orange and break the segments into two or three chunks each.

Quickly stirfry the duck breasts for just a short while – 30 seconds or so. Add soy and honey. Throw in the orange chunks and sliced fennel and stirfry until the fennel has wilted slightly and the orange is warm (and the duck is still somewhat rare, so start with pretty rare duck!).

And then you’re done.

I suspect this would be improved with some sesame seeds stirfried just before the duck breasts, but I haven’t tried it yet.

CourionCourion recognized as a Hot Cybersecurity Company to Watch in 2015 [Technorati links]

May 14, 2015 03:54 PM

Access Risk Management Blog | Courion

Cybersecurity Ventures, a research and market intelligence firm focused on the cyber security industry (which it projects will grow to more than $155 billion by 2019), recently published the ‘Cybersecurity 500’, what the firm describes as a list of the world’s hottest and most innovative cyber security companies.

We’re delighted that Courion was recognized on the list.

blog.courion.com

May 13, 2015

OpenID.netCertification pilot expanded to all OIDF members [Technorati links]

May 13, 2015 09:52 AM

The OpenID Foundation has opened the OpenID Certification pilot phase to all OpenID members, as the Board previously announced we would do in May. This enables individual and non-profit members to also self-certify OpenID Connect implementations. The OpenID Board has not yet finalized beta pricing to cover the costs of certification applications during the next phase of the 2015 program. OpenID Foundation Members’ self-certification applications will be accepted at no cost during this pilot phase. We look forward to working with all members on the continued adoption of the OpenID Certification program, including individual and open source implementations.

Don Thibeau
OpenID Foundation Executive Director

May 12, 2015

Kevin MarksHow quill’s editor looks [Technorati links]

May 12, 2015 12:03 AM

a bit familiar?

May 11, 2015

Kevin MarksDoes editing UI affect writing style? [Technorati links]

May 11, 2015 11:54 PM

Listening to This Week in Google, Jeff and Gina were debating writing on Medium, with Gina sorry that people didn't use their own blogs any more, and Jeff saying that Medium's styling and editor made him want to write better to suit the Medium house style.

So I wonder if Aaron’s new version of Quill, with its Medium-style editing for micropub, and Kyle’s Fever Dream – which posts to WordPress, Blogger and Tumblr via micropub – could help change this.

Kevin Marks [Technorati links]

May 11, 2015 11:51 PM
having fever dreams about quill

GluuPart II: Beware of a Microsoft-Google Internet Security Oligarchy [Technorati links]

May 11, 2015 05:28 PM


Note: This is Part II of a three part series. Part I and III are published here and here, respectively.

Microsoft and Google agreeing on Internet Security is a good thing. Consensus on standards from leading technology companies is essential to adoption. However, at the same time, such collaboration requires the community to remain vigilant to avoid potential anti-competitive activity that may impede innovation.

Some of you may have already read my previous blog about how the nonprofit OpenID Foundation (OIDF) unfairly handed out a valuable favor to its leading corporate sponsors–participation in a special pilot program to promote a new OpenID Connect certification program. Coincidentally, the two most renowned free open source implementations were left out.

For simplicity, let’s just call this episode “FOSSgate” (FOSS = Free Open Source Software).

I was discussing this situation with a friend who happens to be a law professor, and he commented that it raises concerns about anti-competitive behavior. But why are Microsoft and Google, who are bitter competitors, collaborating in the first place? The answer is best explained in a diagram that is adapted from Michael Porter’s value chain concept:

(Diagram: a value chain adapted from Michael Porter, contrasting the competitive primary activities with shared supporting activities such as middleware security.)

Although Microsoft and Google compete on products, services, business process, and all the other activities in the green part of the above diagram, they do not compete on middleware security, such as the mechanisms defined by the OpenID Connect standards. Non-user facing security is a “supporting activity.” For example, Google does not say that it has better API security than Microsoft (or vice versa). By collaborating on OpenID Connect, Microsoft and Google are simply saving money by pooling their resources for an expense they have to bear anyway. There is nothing wrong with this–in fact it’s a beautiful thing when applied correctly.

But there is one thing better than sharing expenses–that is turning an expense into a profit center. Again, this can work great, as long as the power is not wielded in a way that discourages innovation. However, the modus operandi of large technology companies is to protect intellectual property, and create monopolies (or in this case, an oligarchy). In the OIDF board minutes from April 22, 2015, we discover that the OpenID Foundation has registered the “OpenID Registered” certification mark in the US, Canada, EU, and Japan. Such a certification mark is a typical way to create a kind of monopoly.

But how would the OIDF go from Certification Program, to monopoly? The answer is simple: get the OIDF Certification approved for certain types of government transactions, and then be the only one who can issue the required certification mark. To this end, the OIDF is diligently working. The same minutes referenced above also report: “The US federal government is planning to write a profile of OpenID Connect… Apparently the goal is to mirror the US government SAML profile.”

Can the executive director and the OIDF board be trusted to provide the leadership and the requisite amount of oversight to head off the kind of anti-competitive behavior to which Microsoft and Google are prone, protecting the public’s trust? Was FOSSgate a singularity, or was it the one cockroach you see, while 1,000 more are hiding in the walls? Before we acquiesce to grant the OIDF an oligarchy that controls Internet Security Certification, I think these questions need to be answered.

Perhaps some of my concerns are covered in an anti-trust statement. For example, OASIS, a model of good governance, publishes their anti-trust guidelines. I couldn’t find this document linked to the OIDF Website, just like I couldn’t find the minutes of the board meetings.

Note: This is Part II of a three part series. Part I and III are published here and here, respectively. 

KatasoftWhat the Heck is OAuth? [Technorati links]

May 11, 2015 03:00 PM

Stormpath spends a lot of time building authentication services and libraries, so we’re frequently asked by developers (new and experienced alike): “What the heck is OAuth?”

There’s a lot of confusion around what OAuth actually is.

Some people consider OAuth a login flow (like when you sign into an application with Google Login), and some people think of OAuth as a “security thing”, and don’t really know much more than that.

I’m going to walk you through what OAuth is, explain how OAuth works, and hopefully leave you with a sense of how and where OAuth can benefit your application.

What Is OAuth?

To begin at a high level, OAuth is not an API or a service: it is an open standard for authorization and any developer can implement it.

OAuth is a standard that applications (and the developers who love them) can use to provide client applications with “secure delegated access”. OAuth works over HTTP and authorizes devices, APIs, servers and applications with access tokens rather than credentials, which we will go over in depth below.

There are two versions of OAuth: OAuth 1.0a and OAuth2. These specifications are completely different from one another, and cannot be used together: there is no backwards compatibility between them.

Which one is more popular? Great question! Nowadays (at this time of writing), OAuth2 is no doubt the most widely used form of OAuth. So from now on, whenever I write just “OAuth”, I’m actually talking about OAuth2 — as it is most likely what you’ll be using.

Now — onto the learning!

What Does OAuth Do?

OAuth is basically a protocol that supports authorization workflows. What this means is that it gives you a way to ensure that a specific user has permissions to do something.

That’s it.

OAuth isn’t meant to do stuff like validate a user’s identity — that’s taken care of by an Authentication service. Authentication is when you validate a user’s identity (like asking for a username / password to log in), whereas authorization is when you check to see what permissions an existing user already has.

Just remember that OAuth is a protocol for authorization.

How OAuth Works

There are 4 separate modes of OAuth, which are called grant types. Each mode serves a different purpose, and is used in a different way. Depending on what type of service you are building, you might need to use one or more of these grant types to make stuff work.

Let’s go over each one separately.

The Authorization Code Grant Type

The authorization code OAuth grant type is meant to be used on web servers. You’ll want to use the authorization code grant type if you are building a web application with server-side code that is NOT public. If you want to implement an OAuth flow in a server-side web framework like Express.js, Flask, Django, or Ruby on Rails, the authorization code grant is the way to go.

Here’s how it typically looks to the user: the familiar provider login-and-consent screen (think of the “Log in with Facebook” dialog).

How to Use Authorization Code Grant Types

You’ll basically create a login button on your login page with a link that looks something like this:

https://login.blah.com/oauth?response_type=code&client_id=xxx&redirect_uri=xxx&scope=email

When the user clicks this button they’ll visit login.blah.com where they’ll be prompted for whatever permissions you’ve requested.

After accepting the permissions, the user will be redirected back to your site, at whichever URL you specified in the redirect_uri parameter, along with an authorization code. Here’s how it might look:

https://yoursite.com/oauth/callback?code=xxx

You’ll then read in the code querystring value, and exchange that for an access token using the provider’s API:

POST https://api.blah.com/oauth/token?grant_type=authorization_code&code=xxx&redirect_uri=xxx&client_id=xxx&client_secret=xxx

NOTE: The client_id and client_secret stuff you see in the above examples are provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you these values.

Once that POST request has successfully completed, you’ll then receive an access token which you can use to make real API calls to retrieve the user’s information from the identity provider.
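
Here is what that exchange might look like from server-side Java – a minimal sketch using the hypothetical blah.com values from above, with the parameters sent as a form-encoded body (many providers accept either form):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CodeExchangeExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and placeholder values, matching the example above.
        URL url = new URL("https://api.blah.com/oauth/token");
        String body = "grant_type=authorization_code&code=xxx"
                + "&redirect_uri=xxx&client_id=xxx&client_secret=xxx";

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream out = conn.getOutputStream()) {
            out.write(body.getBytes("UTF-8"));
        }

        // On success, the JSON response contains the access token.
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}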

The Implicit Grant Type

The implicit grant type is meant to be used for client-side web applications (like React.js or Angular.js) that don’t have a server-side component — or any sort of mobile application that can use a mobile web browser.

Implicit grants are ideal for client-side web applications and mobile apps because this grant type doesn’t require you to store any secret key information at all — this means you can log someone into your site / app WITHOUT knowing what your application’s client_secret is.

Here’s how it typically looks to the user: the same provider login-and-consent screen, except the access token comes back directly in the redirect.

How to Use the Implicit Grant Type

You’ll basically create a login button on your login page that contains a link that looks something like this:

https://login.blah.com/oauth?response_type=token&client_id=xxx&redirect_uri=xxx&scope=email

When the user clicks this button they’ll visit login.blah.com where they’ll be prompted for whatever permissions you’ve requested.

After accepting the permissions, the user will be redirected back to your site, at whichever URL you specified in the redirect_uri parameter, along with an access token. Here’s how it might look:

https://yoursite.com/oauth/callback?token=xxx

You’ll then read in the token querystring value which you can use to make real API calls to retrieve the user’s information from the identity provider.

NOTE: The client_id value you see in the above examples is provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you this value.

The Password Credentials Grant Type

The password credentials grant type is meant to be used for first class web applications OR mobile applications. This is ideal for official web and mobile apps for your project because you can simplify the authorization workflow by ONLY asking a user for their username and password, as opposed to redirecting them to your site, etc.

What this means is that if you have built your own OAuth service (login.yoursite.com), and then created your own OAuth client application, you could use this grant type to authenticate users for your native Android, iPhone, and web apps.

But here’s the catch: ONLY YOUR native web / mobile applications can use this method! Let’s say you are Google. It would be OK for you to use this method to authenticate users in the official Google Android and iPhone apps, but NOT OK for some other site that uses Google login to authenticate people.

The reason here is this: by using the password credentials grant type, you’ll essentially be collecting a username and password from your user directly. If you allow a third-party vendor to do this, you run the risk that they’ll store this information and use it for bad purposes (nasty!).

Here’s how it typically looks to the user: a native username-and-password form inside the app itself (think the official Facebook mobile app).

How to Use the Password Credentials Grant Type

You’ll basically create an HTML form of some sort on your login page that accepts the user’s credentials — typically username and password.

You’ll then accept the user’s credentials, and POST them to your identity service using the following request format:

POST https://login.blah.com/oauth/token?grant_type=password&username=xxx&password=xxx&client_id=xxx

You’ll then receive an access token in the response which you can use to make real API calls to retrieve the user’s information from your OAuth service.
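
Once you have the token, using it is just a matter of attaching it to each API request. A minimal sketch, with a hypothetical resource URL and the common Bearer header scheme (the exact scheme is provider-specific):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class BearerCallExample {
    public static void main(String[] args) throws Exception {
        // Placeholder token from the grant flow above; the resource URL is hypothetical.
        String accessToken = "xxx";
        URL url = new URL("https://api.blah.com/v1/me");

        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // OAuth2 access tokens are commonly sent in a Bearer Authorization header.
        conn.setRequestProperty("Authorization", "Bearer " + accessToken);

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}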

The Client Credentials Grant Type

The client credentials grant type is meant to be used for application code.

You’ll want to use the client credentials grant type if you are building an application that needs to perform non-user related tasks. For instance, you might want to update your application’s metadata — read in application metrics (how many users have logged into your service?) — etc.

What this means is that if you’re building an application (like a background process that doesn’t interact with a user in a web browser), this is the grant type for you!


How to Use The Client Credentials Grant Type

You’ll fire off a single API request to the identity provider that looks something like this:

POST https://login.blah.com/oauth/token?grant_type=client_credentials&client_id=xxx&client_secret=xxx

You’ll then receive an access token in the response which you can use to make real API calls to retrieve information from the identity provider’s API service.

NOTE: The client_id and client_secret stuff you see in the above examples are provided by the identity provider. When you create a Facebook or Google app, for instance, they’ll give you these values.

Is OAuth2 Secure?

Let’s talk about OAuth security real quick: “Is OAuth2 secure?”

The answer is, unquestionably, NO! OAuth2 is NOT (inherently) SECURE. There are numerous, well-known security issues with the protocol that have yet to be addressed.

If you’d like to quickly get the low down on all of the OAuth2 security issues, I’d recommend this article, written by the famed security researcher Egor Homakov.

So, should you use it anyway? That’s a huge topic we have covered briefly in our post Secure Your API The Right Way. If you need to secure an API, this post will help you choose the right protocol.

Key Takeaways

Hopefully this article has provided you with some basic OAuth knowledge. I realize there’s a lot to it, but here are some key things to remember:

Know What Grant Type to Use

If you’re building an application that integrates with another provider’s login stuff (Google, Facebook, etc.) — be sure to use the correct grant type for your situation.

If you’re building a web app with server-side code, use the authorization code grant; a client-side or mobile app without a server component, the implicit grant; your own first-party web or mobile apps, the password credentials grant; and background application code, the client credentials grant.

Don’t Use OAuth2 for Sensitive Data

If you’re building an application that holds sensitive data (like social security numbers, etc.) — consider using OAuth 1.0a instead of OAuth2 — it’s much more secure.

Use OAuth if You Need It

You should only use OAuth if you actually need it. If you are building a service where you need to use a user’s private data that is stored on another system — use OAuth. If not — you might want to rethink your approach!

There are other forms of authentication for both websites and API services that don’t require as much complexity, and can offer similar levels of protection in certain cases.

Namely: HTTP Basic Authentication and HTTP Digest Authentication.

Use Stormpath for OAuth

Our service, Stormpath, offers the password and client credential workflows as a service that you can add to your application quickly, easily, and securely. Read how to:

If you’ve got any questions, we can be reached easily by email.

And… That’s all! I hope you enjoyed yourself =)

Mark Dixon - OracleDeep Blue Defeated Garry Kasparov [Technorati links]

May 11, 2015 02:14 PM

Eighteen years ago today, on May 10, 1997, an IBM supercomputer named Deep Blue defeated chess champion Garry Kasparov in a six-game chess match – the first defeat of a reigning world chess champion by a computer under tournament conditions.


Did Deep Blue demonstrate real artificial intelligence? The opinions are mixed. I like the comments of Drew McDermott, Professor of Computer Science at Yale University:

So, what shall we say about Deep Blue? How about: It’s a “little bit” intelligent. It knows a tremendous amount about an incredibly narrow area. I have no doubt that Deep Blue’s computations differ in detail from a human grandmaster’s; but then, human grandmasters differ from each other in many ways. On the other hand, a log of Deep Blue’s computations is perfectly intelligible to chess masters; they speak the same language, as it were. That’s why the IBM team refused to give game logs to Kasparov during the match; it would be equivalent to bugging the hotel room where he discussed strategy with his seconds. Saying Deep Blue doesn’t really think about chess is like saying an airplane doesn’t really fly because it doesn’t flap its wings.

It will be fun to see what the future brings. In the meantime, I like this phrase, which I first saw on a cubicle of a customer in Tennessee: “Intelligence, even if artificial, is preferable to stupidity, no matter how genuine.”

May 10, 2015

Mark Dixon - OracleLockheed SR-71 Blackbird [Technorati links]

May 10, 2015 05:24 PM

The Lockheed SR-71 Blackbird has to be one of the coolest airplanes ever built. Fast, beautiful, mysterious … this plane is full of intrigue!


The National Museum of the US Air Force states:

The SR-71, unofficially known as the “Blackbird,” is a long-range, advanced, strategic reconnaissance aircraft developed from the Lockheed A-12 and YF-12A aircraft. The first flight of an SR-71 took place on Dec. 22, 1964, and the first SR-71 to enter service was delivered to the 4200th (later 9th) Strategic Reconnaissance Wing at Beale Air Force Base, Calif., in January 1966. The U.S. Air Force retired its fleet of SR-71s on Jan. 26, 1990, because of a decreasing defense budget and high costs of operation. 

Throughout its nearly 24-year career, the SR-71 remained the world’s fastest and highest-flying operational aircraft. From 80,000 feet, it could survey 100,000 square miles of Earth’s surface per hour. On July 28, 1976, an SR-71 set two world records for its class — an absolute speed record of 2,193.167 mph and an absolute altitude record of 85,068.997 feet.

The closest I ever got to one of these beauties was at the Hill Aerospace Museum near Ogden, Utah. Quite a sight!

May 08, 2015

Mark Dixon - OracleDo We Need a Mobile Strategy? [Technorati links]

May 08, 2015 06:44 PM

It is quite amazing to me how many customers I visit who are really struggling with how to handle mobile devices, data and applications securely. This week, the following cartoon came across my desk. The funny thing to me is that the cartoon was published in 2011. Here it is 2015, and we still struggle!

(Cartoon: Marketoonist, 2011)

Mark Dixon - OracleMcDonnell XF-85 Goblin [Technorati links]

May 08, 2015 04:39 PM

I have long been fascinated with airplanes of all kinds. This post is the first of a series of photos of wacky and wonderful aircraft.

We start first with one of the coolest airplanes I have ever seen, the McDonnell XF-85 Goblin. Only two were built, and I saw one of them at the Wright-Patterson Air Force Base museum back in the mid-1980s.

From the National Museum of the US Air Force site:

The McDonnell Aircraft Corp. developed the XF-85 Goblin “parasite” fighter to protect B-36 bombers flying beyond the range of conventional escort fighters. Planners envisioned a “parent” B-36 carrying the XF-85 in the bomb bay, and if enemy fighters attacked, the Goblin would have been lowered on a trapeze and released to combat the attackers. Once the enemy had been driven away, the Goblin would return to the B-36, hook onto the trapeze, fold its wings and be lifted back into the bomb bay. The Goblin had no landing gear, but it had a steel skid under the fuselage and small runners on the wingtips for emergency landings.

Pretty neat little airplane!


May 07, 2015

Mark Dixon - OracleWe Passed! [Technorati links]

May 07, 2015 10:10 PM

In order to register for an interesting online service this afternoon, I had to perform an Internet speed test.  It was nice to know that we (my computer, my internet connection and I) passed quite handily!

A lot of water has passed beneath the proverbial bridge since 300 baud acoustic coupler modems!


GluuPart I: No TAX on Internet Security Self-Certification [Technorati links]

May 07, 2015 05:08 PM


Note: This is Part I of a three part series. Part II and III are published here and here, respectively.

The OpenID Foundation (OIDF) recently announced a certification program.

“Google, Microsoft, ForgeRock, Ping Identity, Nomura Research Institute, and PayPal are the first industry leaders to participate in the OpenID Connect Certification program and certify that their implementations conform to one or more of the profiles of OpenID Connect standard.”

How was this elite group selected? Was it based on merit, contribution, level of patronage or simply opportunity?

A clear picture emerges from the meeting minutes. The board picked themselves and their best friends to participate in the initial pilot! Yay Board!

Unless I missed it, there was no notification or outreach to the community asking if anyone wanted to be in this prestigious pilot group that would capture the lion’s share of the press and media publicity. In the minutes, Mike Jones from Microsoft dutifully records on October 2, 2014 that “Don has created a draft workflow for self-certification and a proposed term sheet with Roland Hedberg and his university to create and deploy the conformance testing software. Don will also be visiting Microsoft, Google, and Symantec in the next few weeks and among other topics, will discuss certification with each of them. He has also already discussed it with John Bradley of Ping Identity.”

There are many such notes about further discussions regarding Certification pilot members. By October 29, 2014, the minutes note “several companies have expressed interest being among the first adopters, Forgerock, Google, Microsoft, Ping Identity and Salesforce.”

Of course Gluu has a gripe that we were not included in this group.

Gluu has been participating in OpenID Connect interop tests since 2012. These tests became the basis for the certification program. According to the tests in January 2013, the Gluu Server was the best OpenID Connect Provider (OP) implementation. Since that time, the Gluu Server has continued to test as one of the leading implementations.

Last year, Roland Hedberg, author of the current OpenID Connect certification tests (mentioned above in the meeting minutes), had this to say about the Gluu Server in one of Gluu’s press releases: “I see two main features that speak in favor of the Gluu implementation, it has passed all tests for compliance with the standard with flying colors, and it is one of the most complete implementations of OpenID Connect, making it a singularly useful tool.” Not surprisingly, Gluu’s current results are also quite good.

So despite being one of the most active partners during the pre-release period of the certification tests, Gluu was excluded from the announcement. I also think MitreID Connect deserved a chance to participate in that pilot. Justin Richer contributed greatly to writing the specification, and he wrote an implementation. In baseball, that would be like pitching and getting an RBI!

In my discussions with the OIDF board members, the justification for the select group of participants was to limit demand on the developer, Roland Hedberg. It took Gluu years to develop its OpenID Connect Provider. It’s hard to believe that a deluge of OpenID Connect Provider implementations will arise and drown the OIDF in self-certification requests. If high demand was a concern, then why is there no mention of resource bandwidth concerns in the many discussions recorded in the OIDF minutes about certification?

But it gets better. After reading the minutes, I had another realization–the reason for the certification program is to force membership in the OIDF. In fact, at first Don Thibeau, the Executive Director at the OpenID Foundation, wanted to use OIDF conformance testing to force registration in both the OIDF and the OIX (the “Open Identity Exchange”), a related organization he also runs. It was like a two-for-one! However, the OIDF board pushed back. Google even expressed concern about the OIDF membership requirement, and asked whether the OIDF would eventually relax the membership requirement.

There also seemed to be some concern about charging for the certification. Ping Identity suggested that a very small fee to financially validate the entity would add value to the program; in subsequent meetings, Ping said they thought there should be no fee for a self-certification program. Kudos to Ping… However, these concerns did not stop the plan to use conformance testing to force OIDF membership. So while the board might maintain that there is no fee, there is clearly an intention to require membership.

Even non-profits need to have a sustaining business model. Certainly, some for-profit companies, like Nok Nok Labs, charge for standards certification. And the allure of OpenID Connect is its massive applicability. So I don’t blame the leadership of the OIDF for perhaps wondering “wouldn’t it be great if everyone who wants to show their product is OpenID Connect compliant would pay a small fee for the privilege?” The minutes clearly indicate a thought process where membership will be required for certification, or a fee equal to the membership fee will be assessed to non-members.

It’s a brilliant plan–MSFT and Google reap most of the savings–their massive scale means they arithmetically have the most to gain as security costs go down. And the OIDF gets funding to make sure most of the intangible benefits (like press opportunities) are routed back to the mother ships.

The OIDF asserts that I see a conspiracy where none exists. That’s just the way it is, and we should accept it… But it is obvious to me that the OIDF’s mission should be to serve the community, not the executive director, or the corporations who occupy the board seats.

Now I know why I wasn’t elected to the OIDF board. Imagine what a pain in the neck I would have been, asking all these questions! Frankly, I wonder why we need a dedicated organization, like the OpenID Foundation, for a few specifications. The IETF, OASIS, Kantara and the W3C already have more generic missions.

What is the OIDF board’s advice to Gluu? Simply renew our corporate membership.

So the OIDF’s plan to generate publicity is a success. And now they want to test their business model–that the certification program will drive memberships, starting with Gluu. Our feedback is simple: Gluu will not pay your tariff.

In many ways the future of the Internet is the future of security. Based on this last experience, I am starting to question whether the OIDF has shown whether it is up to that responsibility. If their intention is to increase the quality of OpenID Connect implementations in order to increase security on the Internet, then I applaud them. But right now, I choose not to pay to participate until my concerns can be addressed.

Note: This is Part I of a three part series. Part II and III are published here and here, respectively.
 

May 06, 2015

Matthew Gertner - AllPeersHow to navigate legal issues when buying or selling a business [Technorati links]

May 06, 2015 11:18 AM

An interview with Achim Neumann, President of A. Neumann & Associates, a leading business brokerage serving Pennsylvania, New Jersey, Connecticut, Maryland and Delaware.


Firstly, thanks so much, Mr. Neumann, for taking the time to answer our questions today. We’re spotlighting Mergers and Acquisitions this month, and A. Neumann & Associates were referred to us as experts in M&A and in the legal issues that arise when buying or selling a business.

Thanks for having me! My firm, A Neumann and Associates, LLC, has worked on and consulted within a wealth of business transactions, so I appreciate you reaching out for me to help answer your questions.

Firstly, what are three things you would quickly advise a business owner on when preparing to sell their business? You can keep it short and sweet, as we know you could probably go all day!

Most importantly, a business needs to get a fair market valuation in place. It serves many purposes: it obviously establishes a value, but it also instills the discipline of collecting all the proper documents needed for a sale. Further, the valuation will allow a buyer to make an offer sooner, and it will allow the business owner to sell faster.

If someone decides to hire a business broker to assist them in their acquisition or sale, how should they go about it?

When evaluating different Mergers & Acquisitions professionals and business brokers, make sure that they have complete answers to all of your questions. New Jersey remains a state with no regulation or licensing required to work in the industry; thus, your evaluation becomes all the more important in selecting the right individual. Other states have similar gaps in regulation as well.

Here is a list of things to keep in mind to check up on any potential advisor’s credentials:
•    Is he/she a business broker, or merely a real estate broker attempting to sell businesses?
•    Is he/she affiliated with any key business brokerage organizations?
•    What is the educational background of the professional? Is it verifiable?
•    Does the professional have a financial-based education, and is he/she familiar with business and personal tax issues?
•    How long has the firm been operating?
•    Is the broker the principal, or simply an employee with little vested interest in the business?
•    Other than the brokerage business, has the broker run a business before, and can he/she thus relate to your concerns?

We’ve written about this topic pretty extensively and you can read about it more on our website if you’re interested, http://www.neumannassociates.com/selecting-your-advisor.cfm

We all know legal issues arise when complicated transactions happen, especially when buying or selling a business. What is the best way to avoid a lawsuit from the beginning?

Preferably, a seller has a qualified TRANSACTION attorney in place well ahead of the contemplated sale. This allows the seller to obtain proper legal advice all the way through. The same, by the way, also applies to having a CPA involved.

We sometimes run into the situation where the seller does not have an attorney and waits until he has an offer “on the table”. This is not an intelligent move.

Offers made by buyers are subject to a final Definitive Agreement, typically drawn up by the seller's attorney. Such an agreement needs to be reviewed by both parties, should outline the parameters of the deal, and should help prevent lawsuits.

If someone finds themselves being served in the process of selling a business, what should they do?

The first thing they should do is contact the attorney they should already be working with. Don't let anyone besides a qualified business attorney give you legal advice.

What sorts of things should be in contracts to ensure a disgruntled buyer doesn't try to sue after a transaction has occurred, and if they do, that you as the seller are protected?

Usually there are warranties and representations in the final Definitive Agreement, under which a seller states what he/she “warrants in the sale”. Between such warranties and the prior due diligence executed by the buyer, the buyer should have a fairly good idea of what he/she is buying.

Usually, there are few lawsuits after a transaction. As a matter of fact, we have seen none for the transactions we closed in the past 10 years, due to prior planning, thorough due diligence, and adherence to the law.

Thank you so much once again for answering our questions.

Thanks again for having me!

The post How to navigate legal issues when buying or selling a business appeared first on All Peers.

OpenID.net: Certification Accomplishments and Next Steps [Technorati links]

May 06, 2015 08:08 AM

I'd like to take a moment to congratulate the OpenID Foundation members who made the successful OpenID Certification launch happen. By the numbers: six organizations were granted 21 certifications covering all five defined conformance profiles. See Mike Jones' note Perspectives on the OpenID Connect Certification Launch for reflections on what we've accomplished and how we got here.

We applied the meme “keep simple things simple”, the touchstone when designing OpenID Connect itself, to its certification program. But for as much as we've already accomplished, there are plenty of good things to come. The next steps are to expand the scope of the certification program along several dimensions, per the OpenID board's deliberately phased certification rollout plan. I'll take the rest of this note to outline these next steps.

One dimension of the expansion is to open the program to all members, including non-profit and individual members. This second phase will be open to OpenID Foundation members, acknowledging the years of work that they’ve put into creating OpenID Connect and its certification program.

Closely related to this, the foundation is working to determine our costs for the certification program in order to establish a beta pricing program for the second phase. The board is on record as stating that pricing will be designed with two goals in mind: covering our costs and helping to promote the OpenID Connect brand and adoption.

Putting a timeline on this, the Executive Committee plans to recommend a beta pricing program for the second phase during its meeting on June 4th for adoption by the Board at its meeting during the Cloud Identity Summit on June 10th. We look forward to seeing certifications of open source, individuals’, and non-profits’ implementations during this phase, as well as continued certifications by organizations.

Another dimension of the expansion is to begin relying party certifications. If you have a relying party implementation, we highly encourage you to join us in testing the tests, just like the pilot participants did for the OpenID Provider certification test suite. Please contact me if you’re interested.
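For readers curious about what such a test actually exercises: the first step of nearly every relying party is OpenID Connect discovery, fetching the provider's configuration document from its well-known URL. Here is a minimal sketch in Python; the issuer URL is a placeholder, not a real provider.

import requests

# OpenID Connect discovery: a relying party fetches the provider's
# configuration document from its well-known URL (per the OIDC Discovery spec).
issuer = "https://op.example.com"  # hypothetical issuer, not a real provider
config = requests.get(issuer + "/.well-known/openid-configuration").json()

# The document advertises the endpoints a certification run will exercise.
print(config["authorization_endpoint"])
print(config["token_endpoint"])
print(config["jwks_uri"])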

See the FAQ for additional information on OpenID Certification. Again, congratulations on what we’ve already accomplished. I look forward to the increasing adoption and quality of OpenID Connect implementations that the certification program is already helping to achieve.

Ludovic Poitou - ForgeRock: OpenDJ Nightly Builds… [Technorati links]

May 06, 2015 07:19 AM

For the last few months, there have been a lot of changes in the OpenDJ project in order to prepare the next major release: OpenDJ 3.0.0. While doing so, we've tried to keep options open and continued to make most of the changes in the trunk/opends part, preserving the possibility of releasing a 2.8 version. And we've done tons of work in branches as well as in trunk/opendj. As part of the move to the trunk, we've switched the build to Maven. Finally, at the end of last week, we made the switch on the nightly builds and are now building what will be OpenDJ 3 from the trunk.

For those who are regularly checking the nightly builds, the biggest change is the version number. The new build now reports a development version of 3.0.

$ start-ds -V
OpenDJ 3.0.0-SNAPSHOT
Build 20150506012828
--
 Name Build number Revision number
Extension: snmp-mib2605 3.0.0-SNAPSHOT 12206
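
If you would rather check a running nightly remotely than run start-ds -V on the host, the server publishes the same version information on its root DSE. Here's a minimal sketch using the Python ldap3 library, assuming a server listening on the default LDAP port 1389 that allows anonymous reads of the root DSE:

from ldap3 import ALL, Connection, Server

# Read the root DSE, where OpenDJ publishes vendorName and vendorVersion.
server = Server("localhost", port=1389, get_info=ALL)
conn = Connection(server, auto_bind=True)  # anonymous bind

print(server.info.vendor_name)     # vendor name as published by the server
print(server.info.vendor_version)  # should report the 3.0.0-SNAPSHOT build
conn.unbind()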

We are still missing the MSI package (sorry to the Windows users; we are trying to find the Maven plugin that will let us build the package in a similar way to what we previously did with Ant), and we are also looking at restoring the JNLP-based installer. Otherwise, OpenDJ 3 nightly builds are available for testing in several forms: Zip, RPM, and Debian packages.

OpenDJ Nightly Builds at ForgeRock.org

We have also changed the minimum version of Java required to run the OpenDJ LDAP directory server: Java 7 or higher is now required.

We’re looking forward to getting your feedback.


May 05, 2015

Radiant Logic: Where Are the Customers’ Yachts? [Technorati links]

May 05, 2015 10:39 PM

Current Web Access Management Solutions Will Work for the Customer Identity Market—If We Solve the Integration Challenge

I find it ironic that within the realm of IAM/WAM, we're only now discovering the world of customer identity, when the need for securing customer identity has existed since the first business transactions began happening on the Internet. After all, the e-commerce juggernauts from Amazon to eBay and beyond have figured out the nuances of customer registration, streamlined logons, secure transactions, and smart shopping carts that personalize the experience, remembering everything you've searched and shopped for in order to serve up even more targeted options at the moment of purchase.

It reminds me of a parable from a classic book on investing*: Imagine a Wall Street insider at the Battery in New York, pointing out all the yachts that belong to notorious investment bankers, brokers, and hedge fund managers. After watching for a while, one lone voice pipes up and asks: “That’s great—but where are the customers’ yachts?”

Could this new focus on “customer identity” be an attempt by IAM/packaged WAM vendors to push their solution toward what they believe is a new market? Let’s take a look at what would justify their bets in the growing customer identity space.

Customer Identity: The Case for the WAM Vendors

The move to digitization is unstoppable for many companies and sectors of the economy, opening opportunities for WAM vendors to go beyond the enterprise employee base. As traditional brick-and-mortar companies move to a new digitized distribution model based on ecommerce, they're looking for ways to reach customers without pushing IT resources into areas where they have no expertise.

While there are many large ecommerce sites that have “grown their own” when it comes to security, a large part of this growing demand will not have the depth and experience of the larger Internet “properties.” So a packaged solution for security makes a lot of sense, with less expense and lower risks. And certainly, the experience of enterprise WAM/federation vendors, with multiple packaged solutions to address the identity lifecycle, could be transferred to this new market with success. However, such a transition will need to address a key challenge at the level of the identity infrastructure.

The Dilemma for WAM Vendors: Directory-Optimized Solutions in a World of SQL

As we know, the current IAM/WAM stack is tightly tied to LDAP and Active Directory—these largely employee-based data stores are bolted into the DNA of our discipline, and, in the case of AD, offer an authoritative list of employees that’s at the center of the local network. This becomes an issue when we look at where the bulk of customer identities and attributes are stored: in a SQL database.

So if SQL databases and APIs are the way to access customer identities, we should ask ourselves whether the current stack of WAM/federation solutions, built on LDAP/AD to target employees, would work as well for customers. Otherwise, we're just selling new clothes to the emperor—and this new gear is just as invisible as those customers' yachts.
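
To make the gap concrete, here is a hedged sketch (not any vendor's actual integration) of the same “look up this user” question asked of the two worlds; every hostname, DN, and table name below is hypothetical:

import sqlite3
from ldap3 import Connection, Server

# Employee identity: the LDAP lookup today's WAM stacks are built around.
ldap = Connection(Server("ldap.example.com"), auto_bind=True)
ldap.search("ou=people,dc=example,dc=com", "(uid=jdoe)",
            attributes=["cn", "mail"])
employee = ldap.entries[0] if ldap.entries else None

# Customer identity: the same question against a SQL store, where the bulk
# of customer records actually live.
db = sqlite3.connect("customers.db")
customer = db.execute(
    "SELECT name, email FROM customers WHERE login = ?", ("jdoe",)
).fetchone()

# A WAM stack that speaks only the first dialect cannot see the second
# store without an integration layer in between.

The point of the sketch isn't the dozen lines of code; it's that the two lookups share no schema, no protocol, and no security model.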

Stay tuned over the next few weeks as I dive deeper into this topic—and suggest solutions that will help IAM vendors play in the increasingly vital world of customer identity data services.

*Check out “Where Are the Customers’ Yachts? or A Good Hard Look at Wall Street” by Fred Schwed Jr. A great read—and it’s even funny!


The post Where Are the Customers’ Yachts? appeared first on Radiant Logic, Inc.

Mark Dixon - Oracle: KuppingerCole: 8 Fundamentals for Digital Risk Mitigation [Technorati links]

May 05, 2015 08:45 PM


Martin Kuppinger, founder and Principal Analyst at KuppingerCole, recently spoke in his keynote presentation at the European Identity & Cloud Conference about how IT has to transform and how information security can become a business enabler for the digital transformation of business.

He presented eight “Fundamentals for Digital Risk Mitigation”:

  1. Digital Transformation affects every organization 
  2. Digital Transformation is here to stay
  3. Digital Transformation is more than just Internet of Things (IoT) 
  4. Digital Transformation mandates Organizational Change
  5. Everything & Everyone becomes connected 
  6. Security and Safety is not a dichotomy 
  7. Security is a risk and an opportunity 
  8. Identity is the glue and access control is what companies need

I particularly like his statements that security is both a risk and an opportunity and that “identity is the glue” that holds things together.

Wish I could have been there to hear it in person.

Mark Dixon - Oracle: First American in Space – May 5, 1961 [Technorati links]

May 05, 2015 08:24 PM

Fifty-four years ago today, on May 5, 1961, a long time before I knew anything about Cinco de Mayo, Mercury astronaut Alan B. Shepard Jr. blasted off in his Freedom 7 capsule atop a Mercury-Redstone rocket. His 15-minute suborbital flight made him the first American in space.

His flight further fueled my love for space travel that had been building since the Sputnik and Vanguard satellites were launched a few years previously.

 

Alan Shepard, Mercury-Redstone Rocket

Kantara Initiative: Kantara UMA Standard Achieves V1.0 Status, Signifying A Major Milestone for Privacy and Access Control [Technorati links]

May 05, 2015 12:55 PM

Kantara Initiative is calling on organizations to implement User-Managed Access in applications and IoT systems

Piscataway, NJ, May 5, 2015 – Kantara Initiative announces that the User-Managed Access (UMA) Version 1.0 specifications have achieved the status of Kantara Initiative Recommendations through an overwhelming show of support from the organization’s Members. To mark this milestone, Kantara will be holding a free live webcast on May 14 at 9am Pacific.

Developed through an open and transparent standards-based approach, the UMA web protocol enables both privacy-enhancing consumer-controlled scenarios for release of personal data and next-generation business scenarios for access management. The UMA Work Group has identified a growing variety of use cases, including patient-centric health data sharing, citizen-to-government attribute control, student-consented data sharing, corporate authorization-as-a-service, API security, Internet of Things access control, and more.

“UMA has been generating industry attention with good reason. UMA bridges a critical gap by focusing on customer and citizen engagement to transform privacy considerations into real business development opportunities,” said Joni Brennan, Executive Director, Kantara Initiative.

UMA is an OAuth-based protocol designed to give a web user a unified control point for authorizing who and what can get access to their online personal data. By letting a user lodge policies with a central authorization service that requires a requester's “trust elevation” (for example, proving who they are or promising to adhere to embargoes) before that requester can access data, UMA enables privacy controls that are individual-empowering, an idea that has perhaps gotten lost in the rush to corporate privacy practices focused on compliance.

This model enables individuals interacting with the web to conveniently reuse “sharing circles” and set up criteria for access at a single place, referred to as the UMA authorization server, and then go about their lives. For enterprises, deploying UMA allows applications to be loosely coupled to authorization methods, significantly reducing complexity, and to make the process of access decision-making more dynamic.
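
For the protocol-minded, the flow behind that description looks roughly like the sketch below. The endpoint paths, JSON field names, and tokens are illustrative placeholders rather than the normative UMA wire format:

import requests

RESOURCE = "https://rs.example.com/photos/42"    # hypothetical protected API
RPT_ENDPOINT = "https://as.example.com/uma/rpt"  # hypothetical AS endpoint

# 1. The client tries the resource with no token. The resource server
#    registers the requested permission at the authorization server and
#    hands back a permission ticket.
first_try = requests.get(RESOURCE)
ticket = first_try.json()["ticket"]

# 2. The client presents the ticket, plus whatever claims the owner's
#    policies demand (the "trust elevation" step), to obtain an RPT.
rpt = requests.post(RPT_ENDPOINT, data={"ticket": ticket}).json()["rpt"]

# 3. The client retries with the RPT; access is now governed by the policies
#    the resource owner lodged at the authorization server.
granted = requests.get(RESOURCE, headers={"Authorization": "Bearer " + rpt})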

“Existing notice-and-consent paradigms of privacy have begun to fail, as evidenced by the many consumers and citizens who feel they have lost control of how companies collect and use their personal information,” said Eve Maler, ForgeRock’s VP of Innovation & Emerging Technology and UMA Work Group Chair. “We’re excited that UMA’s features for asynchronous and centralized consent have matured to reach V1.0 status.”

“The future Internet is very much about consumer personal data as an important part of the broader data-driven economy ecosystem. If personal data is truly a digital asset, then consumers need to ‘own’ and control access to their various data repositories on the Internet. The UMA protocol provides this owner-centric control for sharing of data and resources at Internet scale,” says Thomas Hardjono, Executive Director of the MIT Kerberos & Internet Trust Consortium and UMA Work Group specification editor.

“With the growing importance of personal data on the Internet, there is a clear need for new ways to allow individual users to be in control of their data as an economic asset,” says Dr. Maciej Machulak, Chief Identity Architect of Synergetics and UMA Work Group Vice-Chair. “UMA can become the very basis for the profound trust assurance, and notably the trust perception, that end-users and organizations require to finally introduce end-users as genuine stakeholders in their own processes and the integration point of their own data.”

Companies, organizations, and individuals can get involved by joining Kantara Initiative and the UMA Work Group, taking part in planned interoperability testing, and attending the webcast.

“In the Digital Economy, where personal data is the new currency, User-Managed Access (UMA) provides a unique vision to empower individuals more effectively and efficiently, and enables a new approach to secure and protect distributed resources, unlocking the value of personal data,” said Domenico Catalano, Oracle.

“UMA promotes privacy by facilitating access by reference instead of by copy and, most important, by shifting access controls away from inscrutable prior consent to user-transparent authorization,” said Adrian Gropper, MD, CTO, Patient Privacy Rights.

“UMA is the first standard to enable centralized API access management for individuals or organizations. The promise of UMA is to enable the consolidation of security for a diverse group of cloud services. Combined with OpenID Connect for client and person identification, the Internet now has a modern standards infrastructure for Web and mobile authentication and authorization,” said Mike Schwartz, Founder & CEO, Gluu.

“UMA is a major step forward in giving individuals control over their own personal data on the internet. It is a key building block of an environment where people can continuously control access to their sensitive data, rather than simply handing that data over to vendors and hoping they don’t misuse it (or lose it),” said Gil Kirkpatrick, CTO, ViewDS Identity Solutions.

Kantara Initiative provides strategic vision and real-world innovation for the digital identity transformation. Developing initiatives including Identity Relationship Management, User-Managed Access (EIC Award Winner for Innovation in Information Security 2014), Identities of Things, and Minimum Viable Consent Receipt, Kantara Initiative connects a global, open, and transparent leadership community. Luminaries from organizations including CA Technologies, Experian, ForgeRock, IEEE-SA, Internet Society, Radiant Logic, and SecureKey drive strategic insights to progress the transformational elements needed to leverage borderless identity for IoT, access control, context, and consent.

Mark Dixon - Oracle: IAM Euphemism: Opportunity Rich Environment [Technorati links]

May 05, 2015 03:36 AM

Recently I heard an executive who had been newly hired by a company describe its current identity and access management system as an “Opportunity Rich Environment”. Somehow that sounds better than “highly manual, disjointed, insecure, and error-prone,” doesn’t it?