September 21, 2014

Anil John: Who Else Wants a Portable Token as the First Authentication Factor? [Technorati links]

September 21, 2014 08:30 PM

When delivering digital public services, there is a great deal of interest in leveraging a strong token, ideally one that has already been obtained by or issued to an individual, across multiple relying parties. This blog post identifies some of the challenges to overcome to enable a true bring-your-own-token experience.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian Bond: When we marched against the Iraq war, the demands were fairly simple even if they were ignored. [Technorati links]

September 21, 2014 07:19 AM
When we marched against the Iraq war, the demands were fairly simple even if they were ignored.

But when we march against "Climate Change" what are we asking for? What do we want to happen next?

I suspect marching about climate change is like dancing about architecture. But if it makes us feel better, maybe it's still worthwhile.

[from: Google+ Posts]
September 20, 2014

Julian Bond: New Taboos [Technorati links]

September 20, 2014 11:55 AM
New Taboos

From an essay by John Shirley.

What if the phrase "Obscene Profits" were not just a figure of speech?  What if the practice of amassing huge profits while exploiting one's employees, or while contaminating the environment, or while lying to the public, was actually regarded as revolting, and the people who engaged in such practices were shunned as pariahs?

We need some new taboos that we reject utterly. That make us sick. That we will not allow under any circumstances. That are never acceptable. No end ever justifies these means. In the words of Rorschach from Watchmen, "Never compromise. Not even in the face of Armageddon".

Here's a short and incomplete list of possible new taboos:
1) Polluting or toxifying the environment, particularly by corporate action for profit but also by individual action.
2) Lying or deceiving for profit, especially to manipulate children.
3) Using political influence for personal gain.
4) Hiding someone else's theft, fraud, dishonesty or pollution to protect one's own part in the system.
5) Discriminating on the basis of race, gender or sexual orientation.
6) Making unreasonably large profits, e.g. by taking advantage of a monopoly position to price gouge, by avoiding tax, or by paying absurd top salaries.
7) Exploiting workers via uneconomic wages or contracts, e.g. zero-hour contracts or paying minimum wage as opposed to a living wage.
8) Exploiting workers via unsafe workplaces and practices for profit.
9) Torture under any circumstances.
10) Engaging in warfare except in the most dire necessity.

These aren't hard. They're just the basic kindergarten rules of behaviour we teach kids.
- Don't poison other children
- Don't lie
- Don't steal
- Don't hurt other kids just to get what you want
- Don't take more than your share of the pudding

So now apply them to adults.
 New Taboos (Outspoken Authors): John Shirley: 9781604867619: Books »

Mixing outlaw humor, sci-fi adventure, and cutting social criticism

[from: Google+ Posts]
September 19, 2014

Ludovic Poitou - ForgeRock: Some OpenIG related articles… [Technorati links]

September 19, 2014 07:25 AM

My coworkers have been better than me (or at least faster) at writing blog articles about OpenIG.

Here are a few links:

Simon Moffat describes the benefits of OAuth 2.0 and OpenID Connect and how to start using them with OpenIG 3.0.

Warren Strange went a little further: along with a short introduction to OpenIG, he made sample configuration files for OpenIG 3.0 available on GitHub to help you start using OpenID Connect.

Mark, who runs the ForgeRock documentation team, describes the improvements we're making to the Introduction section of the OpenIG docs based on feedback received since the release of OpenIG 3.0.

Filed under: Identity Gateway Tagged: ForgeRock, identity, identity gateway, openig, opensource

Anil John: Please Take My 2014 Reader Survey [Technorati links]

September 19, 2014 03:30 AM

I want to make the content on my blog more relevant to your needs and interests. To do that, would you please take a few minutes to fill out my reader survey?


September 18, 2014

Ludovic Poitou - ForgeRock: New ForgeRock product available: OpenIG 3.0 [Technorati links]

September 18, 2014 04:11 PM

Since the beginning of the year, I’ve taken an additional responsibility at ForgeRock: Product Management for a new product finally named ForgeRock Open Identity Gateway (built from the OpenIG open source project).

OpenIG is not really a new project, as it’s been an optional module of OpenAM for the last 2 years. But with a new engineering team based in Grenoble, we’ve taken the project on a new trajectory and made a full product out of it.

OpenIG 3.0.0 was publicly released on August 11th and announced here and there. But as I was on holiday with the family, I had not written a blog post about it.

So what is OpenIG, and what's new in the 3.0 release?

OpenIG is a web and API access management solution that allows you to protect enterprise applications and APIs using open standards such as OAuth 2.0, OpenID Connect and SAMLv2.
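As a rough illustration of what such a gateway does, here is a minimal Python sketch of the pattern: intercept a request, enforce a bearer token, and only then hand the request on to the protected application. This is a conceptual sketch only; real OpenIG routes are declared in JSON configuration, and the token store and function names below are invented for the example.

```python
# Conceptual sketch of the gateway pattern: a filter in front of a
# protected app rejects requests without a valid bearer token and
# forwards the rest. The token store below is a stand-in, not a real
# OAuth 2.0 authorization server.

VALID_TOKENS = {"token-abc": "alice"}  # hypothetical token -> user mapping

def gateway_filter(request_headers: dict) -> tuple:
    """Return (status, body) after enforcing bearer-token access."""
    auth = request_headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return 401, "Missing bearer token"
    token = auth[len("Bearer "):]
    user = VALID_TOKENS.get(token)
    if user is None:
        return 403, "Invalid token"
    # A real gateway would now proxy the request to the protected
    # application, typically injecting identity headers.
    return 200, "Forwarded to app as " + user

print(gateway_filter({"Authorization": "Bearer token-abc"}))
```

The point of putting this logic in a gateway rather than in each application is that the applications themselves never need to understand OAuth 2.0, OpenID Connect or SAMLv2.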

Enhanced from the previous version are the Password Capture and Replay and SAMLv2 federation support features. But OpenIG 3.0 also brings several new features:

I’ve presented publicly the new product and features this week through a Webinar. The recording is now available, and so is the deck of slides that I’ve used.

You can download OpenIG 3.0 from, or if you would like to preview the enhancements that we’ve already started for the 3.1 release, get a nightly build from

Play with it and let us know how it is working for you, whether by email, a blog post, or an article on our wiki. I will be reviewing, relaying and advertising your work. And I'm also preparing a surprise for the authors of the most outstanding use cases!

I’m looking forward to hear from you.

Filed under: Identity, Identity Gateway Tagged: authentication, authorization, ForgeRock, gateway, identity, oauth2, openidconnect, openig, opensource, product, release, samlv2, security

Nat Sakimura: Do we have a round wheel yet? The maturity and trends of identity-related standards [Technorati links]

September 18, 2014 12:58 PM

The ID&IT site still lists a provisional title, but tomorrow at the ANA Hotel I will give a roughly 30-minute talk titled "Do we have a round wheel yet? The maturity and trends of identity-related standards." The session number is GE-05. I'll be appearing as the "foreign talent," Nat Sakimura.


The content is based on slides from the Cloud Identity Summit keynote that I obtained directly from Ian Glazer, who was a Gartner identity analyst until March. Mixing in his views, my own, and things discussed at a breakfast meeting with Howard Schmidt, former Special Assistant to the U.S. President for Cybersecurity, I will survey the state of international standards for authentication, authorization, attributes, and provisioning, including whether they are usable today.

Since I'm appearing as "foreign talent," I requested simultaneous interpretation, but with budgets tight it wasn't approved, so the talk will be delivered in Japanese with myself as the one-man simultaneous interpreter. Yes, that makes me less "foreign talent" than a washout. Even so, I'll bravely open with the first slide in English, so please favor me with a lukewarm laugh. (_o_)

Do we have a round wheel yet?

September 17, 2014

Ian Glazer: Finding your identity (content) at Dreamforce [Technorati links]

September 17, 2014 10:07 PM

Dreamforce is simply a force of nature (excuse the pun). There are more sessions (1,400+) than you could possibly attend, even if you cloned yourself a few times over. And that's not even including some amazing keynotes. Needless to say, there's a ton to occupy your time when you come join us.

The Salesforce Identity team has been putting together some awesome sessions. Interested in topics such as single sign-on for mobile applications, stronger authentication, or getting more out of Active Directory? You need to check out our sessions!

I’ve put together a handy list of all of the identity and access management content at Dreamforce 14. Hope you find it helpful and I cannot wait to meet all of the Salesforce community grappling with identity management issues.

Monday – October 13th

Tuesday – October 14th

Wednesday – October 15th

Thursday – October 16th

 See you in October!

Nishant Kaushik - Oracle: My Relationship with Metadata: It’s Complicated! [Technorati links]

September 17, 2014 09:27 PM

Ever since the Snowden revelations broke, there has been a lot of interest in metadata, with a lot of ink (or should that be bytes?) devoted to defining exactly what it is, where it can be gathered from, who is capable of doing said gathering (and how), and most importantly of all, whether it is even important enough to warrant all the discussion. Official statements of “We’re only collecting metadata” have attempted to downplay the significance and privacy implications of the metadata collection. Organizations like the EFF have tried to counter that with simple-to-understand examples (like the ones below) that show how a conclusion could be drawn by having access to just the metadata and not the data (the content).


Debunking the Myth of “It’s Just Metadata”. With Data

And today, I read the most easy-to-understand account of just how much can be gleaned from metadata. A group of researchers were given access to “the same type of metadata that intelligence agencies would collect, including phone and email header information” for just one person, Ton Siedsma, for the period of just one week. They gathered this metadata by installing a data-collecting app on his phone. Here’s what they were able to do with it:

As the article points out, the intelligence agencies have access to a lot more metadata (in volume and over time), and much more sophisticated ways to analyze said metadata. So you can see why all the privacy advocates are raising alarms about this.
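To make the "just metadata" point concrete, here is a small sketch, using invented messages and only the Python standard library, of how email headers alone reveal who someone talks to and how often, without reading any message content:

```python
# Given only email headers (no bodies), we can still reconstruct who
# talks to whom and when. The messages below are invented for the
# example; a real collection would span months of traffic.
from collections import Counter
from email import message_from_string

raw_headers = [
    "From: ton@example.org\nTo: lawyer@firm.example\nDate: Mon, 08 Sep 2014 23:55:00 +0200\n\n",
    "From: ton@example.org\nTo: lawyer@firm.example\nDate: Tue, 09 Sep 2014 23:58:00 +0200\n\n",
    "From: ton@example.org\nTo: bank@bank.example\nDate: Wed, 10 Sep 2014 09:01:00 +0200\n\n",
]

# Count correspondents without ever touching message content.
contacts = Counter(message_from_string(h)["To"] for h in raw_headers)
print(contacts.most_common(1))  # the most-contacted correspondent
```

Even this toy version shows a pattern (repeated late-night mail to a law firm) that the sender never put in any message body, which is exactly the researchers' point.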


All this, and we haven’t even touched about all the other organizations that are able to gather this metadata, and whose business models are dependent on selling data and user dossiers to advertisers and other data brokers.

And Yet, I Can Haz Metadata?

With all this, you’d think that I, with all the privacy-related advocacy that I do on Twitter, would hate metadata. But the fact is that it’s a complicated relationship. In looking at the future of security, I’ve talked recently about how we can make it possible to have good security that does not negatively impact usability. But that model relies on doing more work in the background using environmental, transactional and behavioral information – aka metadata. Bob Blakley long ago talked about the move from Authentication to Recognition, which relies on continuous data gathering through different sensors to help identify the person or device interacting with the service. Most multi-factor authentication and risk-analysis services are already there, and going deeper.

All of this means that the security frameworks enterprises rely on will need to gather and have access to all this metadata. This was much easier in the days of employer-issued laptops and phones. BYOD and IoT completely change the landscape by creating new concerns regarding the what, when and how of metadata gathering by enterprises. Commercial entities also need to make their offerings more secure, which is to the benefit of their customers. But how does that mesh with the resulting need to gather metadata about those customers, a need that would ordinarily get a viscerally negative reaction if disclosed? The individual me is constantly having vigorous debates on this topic with the security practitioner me, leading to many amused (and some alarmed) glances from my fellow subway riders. At my core, I’m driven by the belief that we can find a way to balance the metadata gathering necessary to support the security models we’re advocating with giving individuals the controls necessary to manage and preserve their privacy in an informed way.

One thing is clear. Because one person’s metadata is another person’s data, enterprises need to start dealing with the collection, disclosure, usage and protection requirements of this PII (yes, I just classified metadata as PII; let the flame wars begin). So will laws. And engineers. It is likely going to get hidden inside those interminable ToS documents nobody ever reads. And employment contracts.

It’s going to be interesting for a while. And complicated.

OpenID.net: General Availability of Microsoft OpenID Connect Identity Provider [Technorati links]

September 17, 2014 02:45 PM

Microsoft has announced the general availability of the Azure Active Directory OpenID Connect Identity Provider. It supports the discovery of provider information as well as session management (logout). On this occasion, the OpenID Foundation wants to recognize Microsoft for its contributions to the development of the OpenID Connect specifications and congratulate them on the general availability of their OpenID Provider.
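For readers curious what "discovery of provider information" means in practice: per the OpenID Connect Discovery specification, a provider publishes its metadata (authorization, token, and logout endpoints, among other things) at a well-known path under its issuer URL. A small sketch of how a client builds that URL; the issuer shown is a placeholder, not Microsoft's actual endpoint:

```python
# OpenID Connect Discovery: provider metadata is served as JSON from
# <issuer>/.well-known/openid-configuration. Fetching that document
# (e.g. with urllib) tells a client where the provider's endpoints live.

def discovery_url(issuer: str) -> str:
    """Return the well-known configuration URL for an OIDC issuer."""
    return issuer.rstrip("/") + "/.well-known/openid-configuration"

# Illustrative issuer only; consult your provider's documentation.
print(discovery_url("https://login.example.com/tenant-id"))
```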

Don Thibeau
OpenID Foundation Executive Director

OpenID.net: Review of Proposed Errata to OpenID Connect Specifications [Technorati links]

September 17, 2014 01:05 AM

The OpenID Connect Working Group recommends the approval of Errata to the following specifications:

An Errata version of a specification incorporates corrections identified after the Final Specification was published. This note starts the 45 day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. This review period will end on Friday, October 31, 2014. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Errata Drafts. For the convenience of members, voting may begin up to two weeks before October 31st, with the voting period still ending on Friday, November 7, 2014.

These specifications incorporating Errata are available at:

The corresponding approved Final Specifications are available at:

A description of OpenID Connect can be found at The working group page is Information on joining the OpenID Foundation can be found at If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon your feedback by (1) signing the contribution agreement at to join the working group (please specify that you are joining the “AB+Connect” working group on your contribution agreement), (2) joining the working group mailing list at, and (3) sending your feedback to the list.

A summary of the errata corrections applied is:

– Michael B. Jones – OpenID Foundation Board Secretary

OpenID.net: Review of Proposed Implementer’s Draft of OpenID 2.0 to OpenID Connect Migration Specification [Technorati links]

September 17, 2014 12:59 AM

The OpenID Connect Working Group recommends approval of the following specification as an OpenID Implementer’s Draft:

An Implementer’s Draft is a stable version of a specification providing intellectual property protections to implementers of the specification. This note starts the 45 day public review period for the specification drafts in accordance with the OpenID Foundation IPR policies and procedures. This review period will end on Friday, October 31, 2014. Unless issues are identified during the review that the working group believes must be addressed by revising the drafts, this review period will be followed by a seven day voting period during which OpenID Foundation members will vote on whether to approve these drafts as OpenID Implementer’s Drafts. For the convenience of members, voting may begin up to two weeks before October 31st, with the voting period still ending on Friday, November 7, 2014.

This specification is available at:

A description of OpenID Connect can be found at The working group page is Information on joining the OpenID Foundation can be found at If you’re not a current OpenID Foundation member, please consider joining to participate in the approval vote.

You can send feedback on the specifications in a way that enables the working group to act upon your feedback by (1) signing the contribution agreement at to join the working group (please specify that you are joining the “AB+Connect” working group on your contribution agreement), (2) joining the working group mailing list at, and (3) sending your feedback to the list.

– Michael B. Jones – OpenID Foundation Board Secretary

September 16, 2014

Gluu: The Gluu Server, the WordPress of IAM [Technorati links]

September 16, 2014 05:20 PM


One of our goals for the Gluu Server is to replicate the success of WordPress, the popular open source content management system (CMS) used by more than 72 million domains on the Internet (including this one!).

Along the way, we’ve identified many similarities between the two platforms.

Just as it needs a CMS, every domain on the Internet needs a solution for authenticating people and controlling access to resources. Also, a CMS and an identity and access management (IAM) system are invariably intertwined: if you think of a CMS as the house, the IAM system is the lock that restricts or enables access to the appropriate resources and people.

Due to the inherently custom nature of both a CMS and an IAM system, only an extremely flexible, scalable and open solution backed by a large community of developers can broadly serve market needs.

WordPress provides the foundation for Fortune 500 organizational websites all the way down to individual blogs. While the developers of WordPress at Automattic provide enterprise support, training and more–just as Gluu does for the Gluu Server–many small and medium businesses are able to utilize the platform thanks to the large community of independent and affordable service providers.

As modern security needs and access to third-party apps continue to make IAM a similarly universal requirement, a utility open source platform supported by a strong community of developers is needed.

Currently the market for identity and access management is widely distributed with no one solution able to meet the needs of the majority of organizations. SaaS solutions can be quick and affordable, but not flexible or secure enough for many organizations. Proprietary enterprise software, on the other hand, may be good for the Fortune 500 but is too expensive for general market adoption.

The few open source solutions that are available today are either not comprehensive enough to provide a unified solution, forcing developers to mix and match several existing open source projects to meet their needs, or have restrictive licenses that make the software enterprise priced when used in production. Both limitations reduce usability and community development.

Like WordPress, the Gluu Server is free to use in production, provides a large enough feature set for most organizations out of the box, and is extensible enough to build custom features as needed. By enabling people to use and build upon the Gluu Server for free, we envision a worldwide community of security professionals that are able to help organizations with access management challenges using the Gluu Server.

Two businesses are never exactly alike. Core software like a CMS or an IAM system needs to be flexible, affordable and open to serve the hundreds of millions of new websites being created each year.

As the pain and frustration of dealing with insufficient access management solutions continues to grow, we see the Gluu Server, with its industry leading feature set and free open source license, securing a dominant place in the IAM market just as WordPress did in the CMS market.

Katasoft: New Pricing: More API Calls, More Options! [Technorati links]

September 16, 2014 03:00 PM

New Stormpath Pricing

We’re excited to announce new pricing tiers for Stormpath!

Since launching last year, we have kept our pricing stable and simple as we watched early customers use the API. Our goal: learn what our customers care about and then tailor pricing to people’s actual usage and concerns. Now we know!

Raised Included API Calls

The biggest piece of feedback we received: it’s hard to know how many API calls you need before building your app. To that end, we raised the API calls included at each tier, to 1M, 5M, 10M or more, respectively. So, no matter what you’re building, you should have more than enough API calls to get started and we can affordably scale with you. If you get a big signup surge, you will have more important things to deal with.

An API call to Stormpath just got a lot cheaper.

Flat Rate for Additional API Calls

We flattened the rate for additional API calls. Any calls over what’s included in your tier will be billed at $0.20/1000 API calls, a huge decrease relative to our prior plans. A flat rate is just easier for everyone.
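With a flat overage rate, estimating a bill becomes simple arithmetic. Here is a quick sketch using the numbers quoted in this post; the function name is ours for illustration, not a Stormpath API:

```python
# Overage arithmetic for the flat rate described above:
# calls beyond the tier's included allowance bill at $0.20 per 1,000.

def monthly_overage(calls_used: int, calls_included: int,
                    rate_per_thousand: float = 0.20) -> float:
    """Dollar cost of API calls beyond the included allowance."""
    extra = max(0, calls_used - calls_included)
    return extra / 1000 * rate_per_thousand

# e.g. 6M calls in a month on a tier that includes 5M:
print(monthly_overage(6_000_000, 5_000_000))  # 1M extra calls -> 200.0 dollars
```

Staying under the included allowance costs nothing extra, and a one-million-call surge adds a predictable $200 rather than a plan renegotiation.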

Cheaper Access to the Full API

We lowered the cost of the tier that gets you full access to the API – features like Hosted Login screens and LDAP – to $149 per month (instead of $295), per app. It comes with 5M API calls, so fire away. This is a great option for straightforward applications, with 5x the API calls of the old Premium Tier and access to the same features.

One Big Change

The lowest paid tier is increasing from $19 to $49, and is now limited to one application, with the number of included API calls raised to 1M. Why? Most customers at this tier are building proof-of-concept apps or one small app, and we need to balance affordability with access. This new tier gives customers a lot more room to load test, and also raises the API calls included. And at $49/month we can afford to give them great service.

Free Developer Tier Is Still Free. Forever.

The developer tier still gets 100,000 API calls each month, unlimited users and groups, and social login features for one application.

Current customers keep their old plans, but most will save money on the new model. Contact to find out how.

Customers with Sprint pricing for Startups can keep those perks on their current pricing plan or port the discount to a new tier.

The “Stormpath Admin” Application that is created by default will not count toward per-app pricing.

As ever, we welcome your feedback in the comments below. Our goal at Stormpath is to make developers’ lives easier with a “no brainer” service. We hope this new pricing plan means everyone developing against Stormpath can spend less time worrying about API limits, and more time building awesome products!

Julian Bond: New Taboos (Outspoken Authors) by John Shirley [Technorati links]

September 16, 2014 07:34 AM
PM Press (2013), Edition: First Edition, Paperback, 128 pages
[from: Librarything]

Julian Bond: I'll just leave this here. [Technorati links]

September 16, 2014 07:29 AM
I'll just leave this here.

"If there is no centre, we're all on the edge."

From this review.
 Review: Various - Worth The Weight Vol. 2 »
Punch Drunk's second compilation reflects the dubstep diaspora a couple of generations deep.

[from: Google+ Posts]

Julian Bond: I was looking for something else and just found this from 2004. Dig those forgotten cultural references... [Technorati links]

September 16, 2014 07:08 AM
I was looking for something else and just found this from 2004. Dig those forgotten cultural references! Not entirely sure where it came from so needs citation.

The revolution will be blogged.

You will be able to stay home, brother.
You will be able to plug in, turn on and get your own IP.
You will be able to lose yourself on generic V.i.a.g.r.a and Prozac,
Skip out for a Frapuccino during the free pr0n download,
Because the revolution will be blogged.

The revolution will be blogged.
The revolution will not be brought to you by the NY Times
in 4 parts with commercial popups after a one time registration.
The revolution will not show you pictures of Bush
landing on a carrier and leading a charge by John
Ashcroft, Douglas Rumsfeld and Dick Chaney to eat
the profits stolen on the way to Iraq.
The revolution will be blogged.

The revolution will not be brought to you by
the RIAA and will not star Britney
Spears and Paris Hilton or Eminem and Madonna.
The revolution will not give your iPod a new battery.
The revolution will not lock you in with DRM.
The Atkins diet will not make you look five pounds
thinner, because the revolution will be blogged, Brother.

There will be a webcam of you and Winona Ryder
pushing that shopping cart down the block on the dead run,
and Marsha Stewart trying to sneak the drug money past the SEC.
You will be able to predict the winner at 8:32
and get reports from 563 districts because,
The revolution will be blogged.

There will be phonecam pictures of pigs shooting down
brothers on TextAmerica.
There will be phonecam pictures of pigs shooting down
brothers on TextAmerica.
There will be pictures of Rush Limbaugh being
run out of ABC on a rail for "Addiction to prescription drugs".
There will be slow motion and 360 degree QTVR of John
Kerry strolling through Watts in a doubleknit leisure suit
that he had been saving
For just the proper occasion.

Gap, Starbucks, and Hooters will no longer be so damned relevant,
and women will not care if Aleks finally gets down with
Carrie on Sex in the City because people of colour, or even
no colour at all, will be online looking for a brighter day.
The revolution will be blogged.

There will be no highlights on the eleven o'clock
news and no pictures of heavily pierced women
activists or Condoleeza Rice blowing her nose.
The theme song will not be written by Moby,
the Red Hot Chili Peppers, nor sung by Beyonce, Justin
Timberlake, Sheryl Crow, Alicia Keys, or R.E.M.
The revolution will be blogged.

The revolution will not be right back after a message from our leader about weapons of mass destruction, homeland security, or the axis of evil.
You will not have to worry about anthrax in your
post, armed sky marshalls, or biometric ID cards.
The revolution will not give you a transportable TV entertainment center.
The revolution will not help you to be all you want to be.
The revolution will put you in control of the keyboard.

The revolution will be blogged, will be blogged,
will be blogged, will be blogged.
The revolution will be no re-run brothers;
The revolution will be live.
[from: Google+ Posts]

Ludovic Poitou - ForgeRock: 4 years! [Technorati links]

September 16, 2014 07:00 AM

Four years ago exactly, I was free from all obligations to my previous employer and started to work for ForgeRock.

My first goal was to set up the French subsidiary and start building a team to take on development of what would later be named OpenDJ.

Four years later, I look at where we are with ForgeRock and I feel amazed and really proud of what we’ve built. ForgeRock is now a well-established global business with several hundred customers across the globe, and plenty of opportunities for growth. The company has grown to more than 200 employees worldwide and is still expanding. The ForgeRock Grenoble Engineering Center moved to new offices at the end of May and counts 13, soon 14, employees, and we’re still hiring.

Thanks to the ForgeRock founders for the opportunity, and let’s keep rocking!!!

Filed under: General Tagged: ForgeRock, france, grenoble, identity, opendj, startup
September 15, 2014

Nat Sakimura: I'll be appearing at IT&ID 2014 [Technorati links]

September 15, 2014 11:38 PM

IT & ID 2014

I will be appearing, in the "foreign talent" slot, at IT&ID 2014, held on 9/17 (Wed) in Osaka and 9/19 (Fri) in Tokyo.

It's General Session [GE-05]. So, will the customary "** is DEAD" make an appearance?!

OpenID Foundation

Mr. Nat Sakimura

 Osaka  9/17 15:40–16:10: ROOM A & B
 Tokyo  9/19 15:50–16:20: ROOM A & B


As in past years, I will also appear on the closing panel, General Session [GE-07].

Not only with smartphones, but with cloud and BYOD as well, the peculiarities of the Japanese IT market have begun to stand out. The customary annual panel members will discuss what causes these peculiarities and, with them in mind, how IT departments and systems integrators should respond.

株式会社 企
Tatsuya Kurosaka, Representative Director

Chairman, OpenID Foundation
Board Member, Kantara Initiative
Senior Researcher, Open Source Solution Promotion Office, Nomura Research Institute
Natsuhiko (Nat) Sakimura

Visiting Researcher Masanori Kusunoki

OpenID Foundation Japan
Community Lead Shingo Yamanaka

 Osaka  9/17 16:40–17:40: ROOM A & B
 Tokyo  9/19 16:50–17:40: ROOM A & B



Julian Bond: Echopraxia by Peter Watts [Technorati links]

September 15, 2014 07:27 PM
Tor Books (2014), Edition: First Edition, Hardcover, 384 pages
[from: Librarything]

Julian Bond: Lock In by John Scalzi [Technorati links]

September 15, 2014 07:27 PM
Tor Books (2014), Edition: 1ST, Hardcover, 336 pages
[from: Librarything]

Ian Yip: Hey security managers, go hire some marketing people for your team [Technorati links]

September 15, 2014 01:29 PM
This is not a plea for organisations to start actively hiring people away from vendor product marketing teams. But if you want to look for people to point the finger at and explain why you aren't getting the budget required to actually secure your environment, product marketing is a good place to start.

There were two key messages attendees should have taken away from the Gartner Security & Risk Management Summit in Sydney a few weeks ago:
  1. Security priorities tend to be set based on the threat du jour and audit findings.
  2. Security teams need to get better at marketing.
Here's the problem:
  1. Sensationalist headlines sell stories, which attracts more advertisers. This means the threat du jour will get the most airtime.
  2. People who hold the keys to budgets read headlines, which perpetuates the problem.
  3. Product marketing teams know this. So, to get more inbound traffic to their websites, the content creation and PR teams craft "stories" and "messages" around the threat du jour.
  4. Publications notice that vendor messages are in line with their stories, which fuels the hype.
It's like how seeing something on fire makes us think about checking whether our insurance covers fire damage. Meanwhile, the front gate's been broken for the past week but we've left it alone because no one's stolen anything from the house yet.

How can an internal marketing campaign driven by the security team help? You won't be able to stop the hype that builds up around the threat du jour. But as an internal team, you should know what the organisation you work for really cares about in business terms. Take audit findings as an example. While rather boring, translate audit findings into tangible, financial implications for the business and you suddenly have something worth talking about as an overall program instead of a checkbox to tick (which is unfortunately how a lot of internal security budgets get signed off).

As a starting point, take a look at my tongue-in-cheek post about contributed articles. While laced with sarcasm, the structure of my "meaningless contributed article" template works (because it's a structure many are subconsciously used to) if the content holds up. Ensure you have the following points covered:
The mistake many of us make is in thinking marketing is easy; it's not. And it takes good marketing to sell security internally. Crafting an article can help you home in on what really matters and justify budget allocation, which makes it easier to ignore the noise.

Great marketing focuses on what matters by simplifying the messages and communicating the value, be it emotional or financial. This is what most security teams do not know how to do, which is why budgets are not allocated to fix that lock on the front gate. Instead, budgets are spent on fire insurance.

I know this is ironic coming from me, as I work for a security vendor. But if security teams hired marketers to communicate the things that matter to an organisation's security instead of the threat du jour, we as an industry would benefit from it.

As an aside, ever notice how many security companies have the word "fire" in their name?

CourionWait, There’s More in 8.4! [Technorati links]

September 15, 2014 12:50 PM

Access Risk Management Blog | Courion

Peter GeorgeIn a Tuesday August 26th press release and follow-on blog post, we shared a few details regarding how the latest version of the Access Assurance Suite leverages intelligence at the initial point of provisioning. This new capability ensures that you don’t inadvertently provide users with access that may lead to a governance violation. It complements the IAM suite’s existing use of intelligence to monitor users’ access and to automatically alert you or take action when a user’s access falls out of compliance. But wait, there’s even more in 8.4!

The latest version of the Access Assurance Suite also enables you to easily configure your identity and access management system to reflect how you intuitively think about your business. Now your users can search for access, approve requests, and certify access in your own familiar everyday language and with your own natural organizational structure. We call this Access Your Way. This new access model can be used along with our suite’s existing tagging capabilities, improving your ability to categorize access for fast, intuitive user searches.

We’ve also leveraged new user interface technology to give the product a fresh new look and to extend support to a variety of additional browsers and devices. You can now use the Access Assurance Suite from an expanded range of Google Chrome, Internet Explorer and Mozilla Firefox browsers across PCs, tablets, and mobile phones. The Access Assurance Suite’s responsive new user interface automatically scales to different browser and device sizes. There’s no longer a need to wait until you get to your desk to reset your password. Just grab your Apple or Android cell phone or tablet and go.

Of course, intelligent provisioning, Access Your Way, and a great user experience are only part of what’s new. The 8.4 release includes dozens of other new capabilities, ranging from expanded user dashboards to increased control over delegation to more sophisticated encryption and hashing algorithms to simplified self-service capabilities.

To learn more click here or call us at 866.COURION.

Vittorio Bertocci - MicrosoftMigrate a VS2013 Web Project From WIF to Katana [Technorati links]

September 15, 2014 07:59 AM

As you already know, VS2013 introduced a new ASP.NET project creation experience that closely integrates with Azure AD – allowing you to provision an entry for your application right at project creation time, without the need to visit the portal.

Projects created through that experience implement their identity functionality with Windows Identity Foundation. WIF is great: it is the technology that brought claims-based identity from its obscure origins to the preferred way of securing access to cloud and, in general, remote resources. It is fully supported, and it will remain supported in lockstep with the .NET Framework in which it lives. Now that we have introduced a simpler claims identity programming model in Katana, however, it does feel a bit dated!
The VS tools did not really have any alternative to WIF – when the new ASP.NET project creation experience originally shipped, our support for WS-Federation and OpenId Connect in the form of OWIN middleware wasn’t around yet. In fact, it reached GA less than one month ago!

As we move forward, you can expect the simpler model to replace the WIF codebase in the templates as well; in the meantime, I know that a lot of you would like to make the jump today and convert projects created and provisioned with VS2013 from WIF to OWIN. The good news is that, as long as you are willing to do some light handiwork, it is quite easy to do.

In this post I’ll tell you about one quick & dirty way of doing that, meant to be used right after you created the project. If you developed the project further, this trick won’t help you: the migration is still possible, but it would require more work to identify and eliminate the moving parts needed by WIF but no longer required by the new OWIN middleware. Let me know in the comments if many of you are in that situation, and I’ll write a new post with all the details.

Step 1 – Create a Web App with VS2013

You have seen this tons of times already, but I’ll add the sequence again just in case.

Create a new ASP.NET project.


You’ll get the usual One ASP.NET dialog.


Choose MVC and click Change Authentication.


Choose Organizational Account, enter the domain of your Azure AD tenant you want to provision the app to, and click OK.


You’ll get the classic ADAL dialog that prompts you for your Azure AD credentials, so that VS can reach out to your directory and create the app entry on your behalf. Note: the dialog is so long because I have an awesomely big monitor :-)

Click OK again. VS will generate your project and will create an entry for it in your Azure AD tenant.

The Application’s Coordinates

The project creation logic generates a lot of moving parts that contribute to the identity functionality. We are going to ignore most of those, given that the OWIN middleware does not need them, and stay laser-focused on the few pieces of info we do need.

We need to find the coordinates that were used to create the application’s entry in Azure AD, given that those values are what we will need when creating protocol requests at authentication time – no matter which development stack we use.

We’ll find most of what we need in the web.config file. Open it, then find the appSettings element.

    <add key="webpages:Version" value="" />
    <add key="webpages:Enabled" value="false" />
    <add key="ClientValidationEnabled" value="true" />
    <add key="UnobtrusiveJavaScriptEnabled" value="true" />
    <add key="ida:FederationMetadataLocation" 
      value="" />
    <add key="ida:Realm" value="" />
    <add key="ida:AudienceUri" value="" />

The highlighted part is what we need.
Realm indicates the identifier assigned to the app for WS-Federation flows.
FederationMetadataLocation is the endpoint from which Azure AD publishes the tenant’s issuer coordinates. Technically, for this tutorial all you need is the tenant portion of that address, but in the general case (e.g. any WS-Federation provider, as opposed to just Azure AD) you would need the entire metadata address.

Save those values somewhere; I usually just whip them into a notepad window.

Note: Those handy appSettings values are inserted by the ASP.NET project configurator logic; however, here’s a trick that will work with ANY web app using WIF, regardless of how it was configured. If you scroll a bit further, you’ll find the <wsFederation> element.

 <wsFederation passiveRedirectEnabled="true"
   requireHttps="true" />

This is the element that is actually picked up by WIF at authentication time. The realm value is the same as the above. The issuer value indicates the endpoint of the STS you want to connect with; this is not as good as having the address of the metadata, but it provides a great starting point, given that you can usually take the base URL of the STS endpoint and append “FederationMetadata/2007-06/FederationMetadata.xml” to it to find the true metadata location.

The only other piece of info we need is the port number that IIS express assigned to our app for the SSL binding – that value has been used to provision the return URI in the Azure AD application entry, and unless we want to go to the portal and modify it manually we have to ensure we’ll use the same value. You can find it in the project properties, in the SSL URL field.


Paste that in the same notepad window, and close VS.

Step 2 – Get the Katana WS-Federation Sample and Configure it With Your App Coordinates

As I anticipated earlier, this is a quick and dirty trick. Instead of modifying the project created with the template, I will simply abandon it – and reuse its coordinates in a new project configured to use OWIN. This will take advantage of the application entry that was created for it in the Azure AD tenant.

We already have a project ready for you – it’s the WS-Federation sample we released with Katana. You can find it here.

From the GitHub page, choose Clone in Desktop or Download ZIP, whichever works best for you. Once you have the project locally, open it.

Head to the web.config file. You’ll find the appSettings right on top. Replace the value of ida:Wtrealm with the realm value you saved from the first app. Do the same with ida:Tenant and the tenant value you used in the original project.

  <add key="webpages:Version" value="" />
  <add key="webpages:Enabled" value="false" />
  <add key="ClientValidationEnabled" value="true" />
  <add key="UnobtrusiveJavaScriptEnabled" value="true" />
  <add key="ida:Wtrealm" value="" />
  <add key="ida:AADInstance" value="" />
  <add key="ida:Tenant" value="" />

Note: If you were working with a provider other than Azure AD, you’d: a) skip the tenant part in the config, and b) go directly to App_Start/Startup.Auth.cs, scroll to the code that initializes the WsFederationAuthenticationOptions, and assign to the MetadataAddress property the full address of the metadata document you obtained earlier.

Done? Very good!

Now, the most delicate part. We need to tell VS that we want our project to start on the port that was assigned to the OTHER project (in this case https://localhost:44300/), instead of the port the sample was configured to run on (https://localhost:44320/).

Right-click on the project in the solution explorer and choose properties. Click on the web tab on the left.

Find the Project URL field. Change the port shown there to match the one of the original project. Click Create Virtual Directory. You’ll get the following warning:


VS knows that there’s already a project mapped to that port, the original one, and just wants to make sure you’re OK to remap that port to the current project. That’s exactly what we want. Click OK. If everything goes well, you’ll get a confirmation.

That’s it! Shift+CTRL+S for saving everything, Shift+CTRL+B for building the project. Yes, I am a big fan of shortcuts.

It will take some time, given that on the very 1st build it needs to restore all NuGet packages, but it should be ready pretty soon.

Once it’s done, press F5.


One small difference from the original template is that this project is designed to offer an unauthenticated landing rather than imposing authentication for every resource. You can easily change this by moving the [Authorize] attribute around.

Hit Sign In.


So far so good… enter the creds of any user in the target Azure AD tenant.


Aaaaaand it’s done. Our OWIN-secured project successfully re-used the coordinates that the ASP.NET project creation experience originally provisioned in Azure AD.


When I bought my Surface Pro 2 I was super enthusiastic. Awesome screen, great stylus responsiveness. I loved it.
A couple of months ago the i7 version of the Surface Pro 3 came out, and I promptly grabbed one. The screen is absolutely incredible, it is feather light, super-fast… I use it all the time, including for writing this post.
And the Surface Pro 2? It’s no less great than when I got it, but I like the Pro 3 better… hence the Pro 2 is hitting eBay.

You know where I am going with this. WIF is still a great technology, super flexible, 100% supported… but now that Katana is out, I’d use it all the time if I could. I know this is the same for many of you, which is why I wrote this post. The support for the new model in the rest of our dev stack will come; in the meantime, if you have specific WIF->OWIN migration scenarios you’d like guidance for, feel free to drop me a line or hit me on twitter – I’ll do my best to help :-)

September 14, 2014

Anil JohnThe Value of Sameness in a World Demanding Identity [Technorati links]

September 14, 2014 04:00 PM

Two terms often used interchangeably in our community are Credential and Token. But they are not the same since what each is able to assert online is not the same. This blog post delves a bit deeper into this topic and the value provided by a Token to both individuals and relying parties.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

Julian BondIt's late 2014 and there are cultural critics still arguing that creativity stopped in 2003, using examples... [Technorati links]

September 14, 2014 07:12 AM
September 13, 2014

Julian BondA while ago I went looking for cocktails named after places and boroughs in New York. There's really... [Technorati links]

September 13, 2014 09:27 AM
A while ago I went looking for cocktails named after places and boroughs in New York. There's really quite a lot of them, from the well known Manhattan to the Harlem Mugger. That led to trying to do the same thing for London, but that proved much harder. Scanning the whole of the Savoy Cocktail Book only turned up the Piccadilly and the Mayfair.

This week's cocktail is the Piccadilly.
Gin (Portobello), 25ml French Vermouth (Noilly Prat), Dash Grenadine (Monin), Dash Absinthe (no absinthe, so I used Bourdin Pastis), Shaken, Martini Glass. The end result is pink, so I added one fluorescent Opies maraschino cherry as a garnish. I'm not really convinced, especially as the Pastis overpowered the other flavours. The Grenadine is really just colour. I think there might be a drink in there, though, where you make a wet martini (say 4:1), add a dash of grenadine for colour, stir so it doesn't dilute so much, and then do an absinthe/pastis wash of the glass the way you would when making a Sazerac. So there's just the smell of the anise without so much of the flavour.
 Piccadilly Cocktail | Savoy Stomp »
Piccadilly Cocktail. 1 Dash Absinthe. (Verte de Fougerolles) 1 Dash Grenadine. (Homemade) 1/3 French Vermouth. (3/4 oz Dolin Blanc Vermouth) 2/3 Dry Gin. (1 1/2 oz Beefeater 24). Shake (I stirred) well and strain into a cocktail glass. Like the Phoebe Snow, there's nothing particularly deep ...

[from: Google+ Posts]
September 12, 2014

MythicsThe Features and Benefits of Oracle Coherence [Technorati links]

September 12, 2014 08:31 PM

Coherence has many meanings across different domains, including physics, mathematics, or computer science. Coherence within computer science may refer to a feature within Parallels Desktop for…

KatasoftEasy API Key Management for Node - A Sample App [Technorati links]

September 12, 2014 03:00 PM

When you build a REST API, creating the infrastructure to generate and manage API keys, OAuth tokens, and scopes can be tedious, risky and time-consuming. Fortunately, Stormpath just added API key management to our express-stormpath module. Now, API and webapp developers using express.js can generate and manage API Keys and OAuth tokens, as well as look up and secure developer accounts – all without custom OAuth code.

What We’re Building

This post will walk you through an express application that lets you:

The API I built is a simple one: it returns the weather for a requested city in Fahrenheit.

Before We Start…

  1. Follow along with the app online
  2. When you have the application running in a browser, open the “Web Inspector” and navigate to the Network tab to see some important things happening:
    • When making any REST calls, click on the request made to the /weather endpoint and look at the Request Headers –> Authorization.
    • When generating an OAuth Token, click on the oauth request and scroll down to the section titled “Form Data” to see the grant_type and the scope of the request.
  3. Note the directory structure of the github repository for this project. All server-side logic (which this post focuses on) will be in /server.js.

Let’s get started.


User registration and login are built into express-stormpath, and a good place to start with any app. You can see the login screens on the live app.

Default Login Screen Stormpath Express API Management

Let’s take a look at the code that makes this happen. We first import the necessary packages (all code in this post goes into the root node file, in my case /server.js):

var express = require('express');
var server = express();
var stormpath = require('express-stormpath');
var http = require('http'); // used later to call the weather API

Next, we set up the server to use Stormpath:

server.use(stormpath.init(server, {
  application: process.env['STORMPATH_APP_HREF'],
  secretKey: process.env['STORMPATH_SECRET_KEY'],
  redirectUrl: '/dashboard',
  getOauthTokenUrl: '/oauth',
  oauthTTL: 3600
}));

The application field points Stormpath to the right application for the project. Users will be pointed to the redirectUrl after logging in or creating an account. As for the OAuth fields, we will get back to those a little later. It is important to avoid storing your API Key, Application Href, and Secret Keys (used for user sessions) in plain text in your code. Instead, export these as environment variables and access them in server.js by doing process.env['name_of_env_var']. Stormpath-express is capable of reading the environment variables itself, so exporting them to your system is enough; however, above I show how to manually set the stormpath-express variables in code.
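For illustration, the settings above could be factored into a small helper that reads purely from an environment object (this is a sketch, not part of the actual sample; the variable names match the init() call):

```javascript
// Sketch only: build the stormpath.init() options from an environment object,
// so nothing sensitive lives in the source code itself.
function stormpathConfig(env) {
  return {
    application: env['STORMPATH_APP_HREF'],
    secretKey: env['STORMPATH_SECRET_KEY'],
    redirectUrl: '/dashboard',
    getOauthTokenUrl: '/oauth',
    oauthTTL: 3600
  };
}

// e.g. server.use(stormpath.init(server, stormpathConfig(process.env)));
var config = stormpathConfig(process.env);
```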

Once a user goes to our site (the root of the site or ‘/’), we need to redirect them to the custom login page provided by stormpath-express, which by default lives at /login.

server.get('/', function(req, res) {
  res.redirect(302, "/login");
});

Now that we have login and account creation out of the way, let’s use API Keys to protect our weather API.

API Key Generation

First, we need to give the user an API key to use. This way, any REST endpoint is protected and accessible only to users who possess a valid API Key and Secret. Once a user logs in or creates an account, they will go directly to the application dashboard, where an API Key is automatically generated and displayed.

Dashboard with API Key

Let’s see what the code looks like:

server.get('/dashboard', stormpath.loginRequired, function(req, res) {

  res.locals.user.getApiKeys(function(err, collectionResult) {
    if(collectionResult.items.length == 0) {
      res.locals.user.createApiKey(function(err, apiKey) {
        res.locals.apiKeyId =;
        res.locals.apiKeySecret = apiKey.secret;
        res.locals.username = res.locals.user.username;
      });
    }
    else {
      collectionResult.each(function(apiKey) {
        res.locals.apiKeyId =;
        res.locals.apiKeySecret = apiKey.secret;
        res.locals.username = res.locals.user.username;
      });
    }
  });
});
By calling res.locals.user.getApiKeys we ask Stormpath to return a collection of an account’s API Keys. In the if statement, we check if the account has any API Keys. If not, Stormpath generates one and returns it to the client. In the else statement, where an API Key has already been generated, Stormpath returns the first API Key available.

Making a REST Call With Basic Authentication

Now the user is logged in and has access using the API Key Id and Secret. Let’s have them make an API call.

Basic Authentication Dashboard

In this sample application, the REST endpoint returns a floating point number representing the weather in the requested city. For example, if a GET request is made to /weather/London, a floating point number with one digit after the decimal is returned to the client, representing the weather in London. Only one endpoint is available, in the form of /weather/:city, where city can be any one of the four cities provided by the radio buttons.

First, the client has to Base64 encode the key:secret pair and send this to the server as the authorization header. In angular.js, the HTTP request would look something like this:

$http({method: "GET", url: '/weather/' + $, headers: {'Authorization': 'Basic ' + sharedProperties.getEncodedAuth()}})
  .success(function(data, status, headers, config) {
    $scope.temp = data + ' F';
    $scope.myCity = $;
  })
  .error(function(data, status, headers, config) {
    // handle the error (e.g. bad credentials)
  });

In this HTTP request, we specify the desired city in the URL (/weather/:city) and add our Base64-encoded API key as the Authorization header.
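The encoding step itself is simple: the Basic scheme is just "Basic " plus the Base64 of "keyId:secret". A Node-side sketch (the id and secret below are placeholders; in a browser you would use btoa() instead of Buffer):

```javascript
// Sketch: build the Basic Authorization header value from an API key pair.
// The id and secret used below are placeholders, not real Stormpath credentials.
// (Buffer is the Node.js way; in a browser you would use btoa().)
function basicAuthHeader(apiKeyId, apiKeySecret) {
  var encoded = Buffer.from(apiKeyId + ':' + apiKeySecret).toString('base64');
  return 'Basic ' + encoded;
}

console.log(basicAuthHeader('myId', 'mySecret'));
// → "Basic bXlJZDpteVNlY3JldA=="
```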

Now lets take a look at what happens on the server side, where this request gets processed:

server.get('/weather/:city', stormpath.apiAuthenticationRequired, function(req, res) {

    if(req.headers.authorization.indexOf('Basic') === 0) {
        getWeather();
    }
    else {
        res.status(401).end(); // only Basic auth is handled in this version
    }

    function getWeather() {
        console.log("Getting weather for " +;
        var url = "" +; // the weather API's base URL is elided in the original post
        var data = "";

        http.get(url, function(myRes) {
            myRes.on('data', function(chunk) {
                data += chunk;
            });
            myRes.on('end', function() {
                callback(data);
            });
        }).on('error', function() {
            console.log("Error getting data.");
        });
    }

    function callback(finalData) {
        var json = JSON.parse(finalData);
        // convert from Kelvin to Fahrenheit
        var fahrenheit = Math.round((((parseFloat(json.main.temp) - 273.15)
            * 1.8) + 32) * 10) / 10;
        res.send(fahrenheit.toString());
    }
});


First, notice the stormpath.apiAuthenticationRequired call that precedes the callback function of our route. This function verifies that the authorization credentials sent over are legitimate. If they are not, the server will return a 401 Unauthorized error. Assuming the credentials are correct, the server is then allowed to return the weather of the desired city.

Here is how the same request can be made with CURL:

curl --user "[API_KEY]:[API_SECRET]" http://localhost:8080/weather/London

If authentication is successful you will get back a floating point number representing the weather in London. If it is not, you will see: {“error”:“Invalid API credentials.”}.
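The conversion the callback performs (the formula suggests the upstream API reports temperatures in Kelvin) can be pulled out and sanity-checked on its own:

```javascript
// The callback's conversion as a standalone helper: Kelvin to Fahrenheit,
// rounded to one decimal place, mirroring the formula used in the route above.
function kelvinToFahrenheit(kelvin) {
  return Math.round((((kelvin - 273.15) * 1.8) + 32) * 10) / 10;
}

console.log(kelvinToFahrenheit(293.15)); // 20 °C → 68
console.log(kelvinToFahrenheit(273.15)); // freezing point → 32
```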

Generating an OAuth Token with Scope

Basic Authentication is acceptable for a few use cases, but we strongly recommend you use OAuth if security is important to your API. With OAuth, making requests to protected endpoints does not expose the API Key Id and Secret. It also gives a developer the ability to grant access only to certain scopes of the endpoint a user is trying to access, whereas authenticating with API keys alone grants access to the entire endpoint.
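The scope check that enables this can be sketched as a plain function (the permissions array and the whitespace-stripping mirror the server-side route shown later in this post):

```javascript
// Sketch: a token grants access to a city only if that city is among the
// scopes (permissions) it was issued with. Whitespace is stripped the same
// way the server-side route does it.
function cityAllowed(permissions, requestedCity) {
  return permissions.indexOf(requestedCity.replace(/\s+/g, '')) >= 0;
}

console.log(cityAllowed(['London', 'Paris'], 'London')); // → true
console.log(cityAllowed(['London', 'Paris'], 'Berlin')); // → false
```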

Generate OAuth Token with Scope

By checking the desired cities and clicking Get Oauth, the user gets a token which can now be used to target the REST endpoint. What exactly happened on the server side to generate this Oauth Token? Let’s look at the server setup code one more time:

server.use(stormpath.init(server, {
  application: process.env['STORMPATH_APP_HREF'],
  secretKey: process.env['STORMPATH_SECRET_KEY'],
  redirectUrl: '/dashboard',
  getOauthTokenUrl: '/oauth',
  oauthTTL: 3600
}));

The getOauthTokenUrl is set to /oauth, which means that a POST request sent to that URL will check for API Key credentials and return a token that is valid for 1 hour (by default). This POST request also needs to carry two form parameters: "grant_type", set to "client_credentials", and "scope", set to the requested scope. Here is the request in angular.js:

myData = $.param({grant_type: "client_credentials", scope: scopeData});
    $http({method: "POST", url: '/oauth',
        headers: {'Authorization': 'Basic ' + sharedProperties.getEncodedAuth(), 'Content-Type': 'application/x-www-form-urlencoded'},
        data : myData})

        .success(function(data, status, headers, config) {
            var oauthToken = data.access_token;
            $scope.oauthToken = oauthToken;
        })
        .error(function(data, status, headers, config) {
            // handle the error
        });

Making a REST Call Using the OAuth Token

In order to hit the REST endpoint using OAuth, we must send our token to the /weather/:city endpoint using Bearer authentication:

$http({method: "GET", url: '/weather/' + $, headers: {'Authorization': 'Bearer ' + sharedProperties.getOauthToken()}})
        .success(function(data, status, headers, config) {
            $scope.temp = data + ' F';
            $scope.myCity = $;
        })
        .error(function(data, status, headers, config) {
            $window.alert("Permission Denied!");
        });

Now, on the server side we can add the logic for Bearer authentication, and parse our requested scopes.

server.get('/weather/:city', stormpath.apiAuthenticationRequired, function(req, res) {

    if(req.headers.authorization.indexOf('Basic') === 0) {
        getWeather();
    }
    else if(req.headers.authorization.indexOf('Bearer') === 0) {
        var requestedCity =\s+/g, '');
        if(res.locals.permissions.indexOf(requestedCity) >= 0) {
            getWeather();
        }
        else {
            res.status(403).end(); // the token's scopes do not cover this city
        }
    }
    else {
        res.status(401).end();
    }

    function getWeather() {
        console.log("Getting weather for " +;
        var url = "" +; // the weather API's base URL is elided in the original post
        var data = "";

        http.get(url, function(myRes) {
            myRes.on('data', function(chunk) {
                data += chunk;
            });
            myRes.on('end', function() {
                callback(data);
            });
        }).on('error', function() {
            console.log("Error getting data.");
        });
    }

    function callback(finalData) {
        var json = JSON.parse(finalData);
        // convert from Kelvin to Fahrenheit
        var fahrenheit = Math.round((((parseFloat(json.main.temp) - 273.15)
            * 1.8) + 32) * 10) / 10;
        res.send(fahrenheit.toString());
    }
});

The requested scopes live inside the res.locals.permissions object, and we can search it to see if the city we want the weather for is permitted for us. If so, the server will proceed to return the weather; if not, a 403 is returned. Compared to a 401 error, which stands for an unauthorized request, a 403 represents a forbidden request.

London was part of the scopes in the OAuth token, so getting its weather is no problem:

Set Scope for OAuth Token Example

Berlin on the other hand was not, so the weather is not given and an error is returned instead:

API scope permission denied


Node.js and the stormpath-express package make it easy to generate and manage API Key-based authentication in your webapp or API. If you’d like to see more code and even run this application yourself, check out the source code and let us know what you think!

Axel NennkerDeviceAutoLogin [Technorati links]

September 12, 2014 12:04 PM
Maybe you are an Android user and have wondered how the browser sometimes logs you in without asking for a password?

Well, I wondered but never found the time to investigate.

Thanks to the awesome W3C Web Cryptography Next Steps Workshop, and thanks to the usual jet-lag, I found that time now. First I thought that this was a Google-ism: "Chrome does some questionable proprietary trick and knows just how to log in to Google accounts". That is half-true.

There is chatter on the chromium list, but it seems that the Android browser has known this trick since 2011, and Chrome for Android was released in 2012.

So how does it work?
  1. a site responds with a special HTTP header "X-Auto-Login" 
  2. the browser sees that header 
  3. the browser asks the device's account system for local accounts for the realm parameter of the header (e.g. 
  4. the browser asks for a special kind of token from that account 
  5. the browser asks the user for consent to login 
  6. the token is a URL - so if the user consents, the browser opens that URL 
  7. the site the URL points to accepts the token 
  8. the site redirects the browser to the original page the user wants to use


I think this is neat. But why doesn't Google talk about it? Why isn't this standardized at W3C?
Anyway. How can you benefit?
As a user? You already do.
As a website with your own mobile app?
  1. Well, Google is probably not issuing tokens for your site. Maybe they do or would do because they want to be an identity provider?... 
  2. Issue the tokens yourself:
    a. What you need on the Android device is an AccountAuthenticator.
    b. Let your website issue the X-Auto-Login HTTP header "realm=com.yourdomain&args=..."
    c. Let your Account Authenticator from step a generate tokens based on 'String authTokenType="weblogin:" + args;'
    d. Let your site accept the tokens generated by your Account Authenticator.

I think this is a good idea. If your company has a mobile app, then build that Account Authenticator. This is even more true if your company has several mobile apps. (Put the authenticator in your own CompanyServices.apk (like Google does with the GooglePlayServices) so you can update it independently of your apps.)

You might know that I work for a 100% subsidiary of Deutsche Telekom. Why isn't DT doing this? Don't ask me. I have been telling them for years that our own AccountAuthenticator would be "gold". But who listens to me. Working for a big company has its challenges.

Back to wondering... How can we get this or something similar standardized through W3C?

Maybe I should write a blog post to make it more known. But then who reads this blog anyway. ;-)

Thanks for listening.

Nat SakimuraResearchers Point Out Privacy Flaws in Android Apps Such as Instagram [Technorati links]

September 12, 2014 02:11 AM


As reported by CIO magazine [1] and PCWorld [2], privacy flaws have been found in Android apps such as Instagram and Viber. The report comes from the University of New Haven's cyber forensics research group (UNHcFREG). According to it, the apps do things like storing images on servers without any access control, which is quite disappointing.

Another problem with these apps seems to be that even if you try to report a security issue, you cannot get in touch with the developers. Then again, this has been known for a long time; @nov, for one, has struggled with this across many major apps.





September 11, 2014

Kuppinger Cole5 Steps to Protect Your Data from Internal & External Threats [Technorati links]

September 11, 2014 06:19 PM
In KuppingerCole Podcasts

Most organizations have already been hacked or been victims of data theft (internal or external), whether they know it or not – or know it and haven’t been willing to acknowledge it. Many are operating in specific regulatory environments, but aren’t in full compliance, leaving them vulnerable to lawsuits or even criminal prosecution.

Watch online

GluuRoadmap for Higher Education Institutions: Will New Identity Standards Achieve the Promise of Federated Identity? [Technorati links]

September 11, 2014 05:46 PM

“Market Strength” as defined by the number of applications that will support the protocol.

Will New Identity Standards Achieve the Promise of Federated Identity in Higher Education?

OAuth2 based identity standards bridge web and mobile security requirements and have critical developer and industry support.

See Also: Gluu Protocol Predictions

It is harder than you think to identify a person online. Verizon estimates that 80% of security breaches in 2013 were a result of a failure to do so correctly. Since the early 2000s, several standards that define a mechanism to identify a person using the Web have risen and fallen. Some of these standards were presented as the panacea for security, but failed to realize any adoption. Some achieved moderate adoption, but never achieved the ubiquity of Internet standards like DNS or SMTP, where every domain on the Internet is expected to maintain a service.

No Web authentication standard has achieved even the level of adoption of LDAP. Standards in identity and security are crucial because the applications used by higher education institutions are heterogeneous: commercial, SaaS, open source and home-grown applications must all share the same security infrastructure. When a person authenticates, they won't notice the difference between one standard and another. But the institution's IT staff is expected to understand the standards in enough detail to know which organizational cryptographic keys need to be protected and managed.

“SAML”–the Security Assertion Markup Language–is one such standard that has seen moderate adoption. But despite a concerted effort to evangelize SAML by Educause, Internet2 and other information technology leaders in education, thousands of campuses have not adopted it. Today a person is more likely to use their Facebook credentials to access a web or mobile resource than their university SAML credentials. The most successful SAML application has probably been Google Mail: many campuses use SAML to let students check their mail without having to store passwords on Google. But the number of SAML websites the average campus enables is usually pretty small; around a dozen is not uncommon.

The introduction of the iPhone in 2007 changed the requirements for online authentication, and pulled the rug out from under some of SAML’s core assumptions. The Web browser is not the only conduit for services. We use our mobile phones, tablets, and other devices to access a wide array of our online stuff. Largely completed by 2005, SAML was not designed to accommodate many of the patterns now commonly in use, like when a mobile application calls a backend API.

The big consumer IDPs like Google, Microsoft and Facebook figured out how to get a significant number of people and websites to adopt their standards for Web authentication. How did they do it? They did a great job of listening to developers, and designed an authentication API that suited their preferences. And then the developers created great content. The advances made by Google and other consumer IDPs will greatly benefit higher education institutions.

For early adopters of SAML, the good news is that identity and trust management does not change with the introduction of a new identity federation API. Applications architected for SAML can be upgraded to support newer authentication APIs usually by changing a small amount of code, or hopefully by using a different plugin.

Likewise, multi-party federations like InCommon could also profile and support new protocols. Gluu has proposed the development of a new standard for JSON multi-party federation metadata, and standardizing OAuth2 federation endpoints. New protocols may also require updates to definitions in documents like federation Participation Agreements; for example, Gluu publishes a sample agreement that defines terms like “Client Claims” and “OpenID Provider.”

Identity will continue to be a core enabler for higher education institutions. By affiliating with an institution, a person gains access to resources, both physical plant and network. In fact, instead of moving toward outsourcing identity to a central silo like Google, new standards will enable wide-scale decentralization. We don’t want to require a Google account. But if institutions publish the same API as consumer IDPs, website developers won’t have to implement any extra code to support an institution’s security infrastructure.

It is important to avoid “not-invented-here” thinking: innovation comes from unexpected places. By aligning with consumer standards for identity, people will be able to use their university credentials to access even more content. The pace of innovation has not slowed down. The “Internet of Things” is creating even more requirements for interoperable security. New standards will make this possible, and most likely they will extend the OAuth2 standards that originated in the consumer sector.

Kantara InitiativeKantara Initiative Awards CSP Trustmark Grant at ALs 1, 2 & 3 [Technorati links]

September 11, 2014 03:00 PM

is positioned to issue high assurance Digital Identity Credentials across the public and private sectors

PISCATAWAY, NJ–(11 September, 2014) – Kantara Initiative announces the grant of the Kantara Initiative Service Approval Trustmark to the Credential Service Provider (CSP) service operating at Levels of Assurance 1, 2 and 3. The service was assessed against the Identity Assurance Framework – Service Assessment Criteria (IAF-SAC) as well as the Identity Credential Access Management (ICAM) Additional Criteria by Kantara Accredited Assessor Electrosoft.

A global organization, Kantara Initiative Accredits Assessors and Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems. The broad and unique cross section of industry and multi-jurisdictional stakeholders within the Kantara Membership has the opportunity to develop new Trust Frameworks as well as to create profiles of the core Identity Assurance Framework for applicability to their communities of trust.

The key benefits of Kantara Initiative Trust Framework Program participation include: rapid onboarding of partners and customers, interoperability of technical and policy deployments, an enhanced user experience, and competition and collaboration with industry peers. The Kantara Initiative Trust Framework Program drives toward modular, agile, portable, and scalable assurance to connect businesses, governments, customers, and citizens. Join Kantara Initiative now to participate in the leading edge of trusted innovation development.

“Consumers are tired of having to create yet another username and password and to attempt to prove who they are to each government agency when they access government services online,” said Matthew Thompson, COO. “ enables consumers to log in to government services more rapidly with a trusted credential issued by or a participating financial institution, so that consumers can leverage the trust they have already established in their online identity across a network. We are excited to be able to enhance the delivery of government services to the standard consumers expect in the 21st century.”

Thompson adds, “For a Bring Your Own ID model to work, credential providers like should be formally certified. A recent report by the Ponemon Institute revealed that 80% of the more than 3,500 IT security and business professionals surveyed worldwide agreed that formal certification of their BYOID identity provider was important with 29% of those surveyed saying it was essential.”

“We congratulate on their grant of the Kantara Trustmark at Assurance Level 1, 2 and 3. We look forward to their continued leadership in the Identity Management space and as Members of the Kantara Initiative network of experts,” said Joni Brennan, Executive Director, Kantara Initiative.

For further information or to accelerate your business by becoming Kantara Accredited or Approved contact

About Kantara Initiative: Kantara Initiative connects business as an agile industry and community business acceleration organization that enables trusted transactions through our innovations, compliance programs, requirements development, and information sharing among communities including: industry, research & education, government agencies and international stakeholders. Join. Innovate. Trust.

About is the first digital identity network that allows consumers to prove who they are online while controlling how their information is shared with brands. For participating organizations, acts as a trusted intermediary, capable of verifying consumer identity and group affiliations in real-time. does not sell, rent, or loan member information to a third party for any reason. members retain complete control over how, or if, their information is shared on a case-by-case basis. This allows brands to ensure a consistent customer experience across offline and online channels while reducing costs associated with manual verification. was a finalist for WSJ Startup of the Year, named one of Entrepreneur Magazine’s “100 Brilliant Companies,” and was selected as one of five companies to participate in the President’s National Strategy for Trusted Identities in Cyberspace (NSTIC). For more information, please visit


Nat Sakimura5 Million Gmail Usernames and Passwords Leaked?! [Technorati links]

September 11, 2014 02:28 PM





Well, hold on a minute. If this had come from Google, 5 million records would be far too few; Google has a billion users. If someone had actually gotten into the database, there is no way they would take only 0.5% of it. The fact that the data is specific to one language is also deeply suspicious. Most likely this came from phishing. Sure enough, when I went to the original article on TheDailyDot [2], it quoted Google as saying that most of the accounts were very old ones that no longer exist or have been suspended, and that the credentials were probably phished. Going one step further back to the original Russian-language forum post [3], it says "judging from the circumstances, these were phished." So the original report was level-headed; the story simply grew more sensational with each re-publication.


The article notes that you can test at whether your address was leaked, but, if I may offer some unsolicited advice, you should confirm that is a reputable operation before testing. Incidentally, according to Google's official blog, fewer than 2% of the leaked address/password combinations were valid [4].

[1] MACお宝鑑定団 Blog: "500万件にも及ぶGmailのユーザー名とパスワードが流出" ("5 million Gmail usernames and passwords leaked") (retrieved 2014/9/11)

[2] TheDailyDot: "5 million Gmail passwords leaked to Russian Bitcoin forum" (retrieved 2014/9/11)

[3] "А теперь и в сеть выложена база на 5 000 000 адресов" ("And now a database of 5,000,000 addresses has been posted online")


[*] Incidentally, the Nikkei BP article, as you would expect, handled this properly.

Nat SakimuraA Japan Without Imagination: '"If you're blind, don't ride the train." "No wonder he was irritated." Sympathy for the attacker floods Twitter after a blind girl is injured on the Kawagoe Line' [Technorati links]

September 11, 2014 01:24 PM

engelleri kaldır / remove barriers (YouTube). What an utterly appalling story.






[1] "Blind schoolgirl injured: leg kicked, apparently in retaliation for someone tripping over her cane (Kawagoe)," Mainichi Shimbun (September 9, 2014, 20:49)

[2] '"If you're blind, don't ride the train." "No wonder he was irritated." Sympathy for the attacker floods Twitter after a blind girl is injured on the Kawagoe Line' (retrieved 2014/9/11)

[3] Ministry of Foreign Affairs: "Efforts toward the Peace and Stability of Japan and the International Community" (retrieved 2014/9/11)

[4] "Un-beautiful Japan, in full view"

[5] In fact, a pregnant friend of mine says she has been shoved on trains and finds them frightening.

[6] I learned of this from Professor N on Facebook.

Nat SakimuraGoogle's First Public Hearing on the "Right to Be Forgotten" Wraps Up [Technorati links]

September 11, 2014 12:19 PM

The first of seven planned public hearings on the "right to be forgotten," organized by Google's Advisory Council, was held in Madrid on the 9th. According to Reuters [1], eight Spanish experts took part in the discussion, and several of them, including the head of Spain's association of privacy professionals, questioned the wisdom of leaving decisions that affect the public's access to information to a private company like Google.

Google's Advisory Council consists of Chairman Schmidt and Chief Legal Officer Drummond, joined by:

  1. Luciano Floridi, Professor at the University of Oxford (known for the philosophy of information)
  2. Sylvie Kauffmann, Editorial Director of Le Monde (France)
  3. Lidia Kolucka-Zuk, former adviser to the President of Poland
  4. Frank La Rue, UN Special Rapporteur on the promotion of freedom of expression (OHCHR)
  5. José-Luis Piñar, former Deputy Chairman of the Article 29 Working Party
  6. Sabine Leutheusser-Schnarrenberger, former German Federal Minister of Justice
  7. Peggy Valcke, Professor at KU Leuven
  8. Jimmy Wales, founder of the Wikimedia Foundation


EU regulators have welcomed this series of hearings, but Isabelle Falque-Pierrotin, president of the French authority CNIL, and others are critical: participation is not open, and because Google decides who takes part, it can steer the debate through the process.





Advisory Council – Google Advisory Council

Kuppinger Cole24.09.2014: Intelligent Identity Management in the Cloud – a use case [Technorati links]

September 11, 2014 07:28 AM
In KuppingerCole

Most organisations fail to plan identity management in the Cloud. They adopt a variety of software-as-a-service solutions each requiring its own identity repository with a periodic synchronisation that fails to provide sufficient governance over de-provisioned accounts. This webinar looks at the issues with managing identities in the Cloud and one potential solution.
September 10, 2014

GluuOAuth2 Chipset… the answer to IOT Security? [Technorati links]

September 10, 2014 09:56 PM


If you have been following the Gluu Twitter feed, you’ve probably noticed a lot of articles posted recently about Internet of Things (“IOT”) security (or lack thereof).

If you bother to read any of these articles, you will discover that none of them provide any answers as to how a mobile application can share user data while calling APIs, or how the API server can determine whether a request to an API by a certain person, using a certain client, should be honored. It's a weird situation where the people (and even some of the journalists) know that the emperor has no clothes, but the API developers and IOT experts are going about business as usual.

Even though it would make sense to build in security from the ground up, the focus of IOT hardware vendors has been on connectivity and shipping fast. And why not? As long as IOT devices sell, the fact that they might have some terrible security flaw that requires replacement next year is just an extra bonus.

Leveraging existing security standards for IOT has challenges. For example, IOT devices are more resource constrained than phones–they have slower CPUs and less memory. They are disconnected from the Internet more often. Some devices might not ever connect to the Internet, although they may connect to a local network. Some devices might not even have IP: they may connect only via Bluetooth or some other wireless network protocol.

Let’s take a simple example. You have a tablet, and you want to use it to choose a Netflix movie on your TV, pre-heat your oven for the brownies, and tell your robot-butler to take out the ice cream. Luckily, your oven, TV and robot-butler have APIs. But how will they know it's you who made this request (maybe your kids don’t have ice cream permission…)? And how will they know to trust your tablet, which communicates on your behalf?

The answer to IOT security is not to re-invent 15 years of access management experience. The patterns and protocols now available to protect Web resources should be carried over to IOT. This would provide a solid foundation for incremental enhancements in security.

I think that security should be built in at the chipset level. The two most promising APIs for IOT security are OpenID Connect and UMA. OpenID Connect defines person identification, client registration, and client authentication. UMA defines the policy enforcement and decision points. These two profiles of OAuth2 provide open standards for authentication and authorization, respectively.

When people think about security, they tend to focus on all the bad stuff that can happen without security. Many wonder, “When will there be a 9/11 security event that forces user behavior to change?” I think this is the wrong way to look at it. We need security because it would enable us to lead richer, more productive lives. In other words, the opportunity cost of not having security far exceeds the costs of breaches. What could we do if we had security?

Maybe then your robot could take out the ice cream…

Mike Jones - MicrosoftGeneral Availability of Microsoft OpenID Connect Identity Provider [Technorati links]

September 10, 2014 07:06 PM

Microsoft has announced that the Azure Active Directory OpenID Connect Identity Provider has reached general availability. Read about it in Alex Simons’ release announcement. The OpenID Provider supports discovery of the provider configuration information as well as session management (logout). The team participated in public OpenID Connect interop testing prior to the release. Thanks to all of you who performed interop testing with us.

Mike Jones - MicrosoftMicrosoft JWT and OpenID Connect RP libraries updated [Technorati links]

September 10, 2014 06:55 PM

This morning Microsoft released updated versions of its JSON Web Token (JWT) library and its OpenID Connect RP library as part of today’s Katana project release. See the Microsoft.Owin.Security.Jwt and Microsoft.Owin.Security.OpenIdConnect packages in the Katana project’s package list. These are .NET 4.5 code under an Apache 2.0 license.

For more background on Katana, you can see this post on Katana design principles and this post on using claims in Web applications. For more on the JWT code, see this post on the previous JWT handler release.

Thanks to Brian Campbell of Ping Identity for performing OpenID Connect interop testing with us prior to the release.

Kuppinger Cole28.01. - 30.01.2015: Digital Risk & Security Summit 2015 [Technorati links]

September 10, 2014 12:26 PM
In KuppingerCole

The Digital Risk and Security Conference 2015, taking place January 28–30, 2015 at the Intercontinental Hotel, Shenzhen, China, is where you will learn the latest in identity management, cloud technology and compliance requirements in the software industry. Thought leaders and experts in information security get together to discuss and shape the future of secure, privacy-aware, agile, business- and innovation-driven IT. DRS provides a world class list of speakers, panel discussions, thought...

Julian Bond [Technorati links]

September 10, 2014 06:51 AM

The Good, The Bad and The Ugly? For a Fistful of Dollars, Los Trios Paranoias ride forth. But independently, on separate trains.

Meanwhile Thatcher-Lite in the form of "Grinner" Brown tries to throw the electorate a bone.

Less than 10 days to go, postal votes are in, and now they're trying to make up Devo-Max on the fly because they hadn't planned contingencies for something they've known about for 2 years.

You couldn't make it up, could you? Are they trying to fail?

Bonus Link:
 UK leaders campaigning to save Union »

[from: Google+ Posts]
September 09, 2014

Julian BondSo farewell then, iPod Classic. [Technorati links]

September 09, 2014 09:45 PM
So farewell then, iPod Classic.

Well that sucks. If Apple won't build and sell the 1TB iPod Classic I want, then maybe now somebody else will? I seem to have been saying this for quite some time. There's a small but genuine market for a high capacity, high quality personal media player. It's just not a mass market.

I could rush out and buy one of the last remaining 240GB Classics on eBay available from people that mod and upgrade the official 160. But unfortunately the firmware doesn't really support the extra space. And the extra space still isn't big enough.

So what alternatives are there for those of us with too much music in our collections? And no, storing it in the cloud and accessing it via streaming on a smartphone is not an option.
 The iPod Classic Is Dead | TechCrunch »
My friends, I come to you with sad news. Our old friend, the iPod classic, is no more. Apple unceremoniously removed the much-beloved media player from its..

[from: Google+ Posts]

Julian BondGlenn Greenwald nails it again. [Technorati links]

September 09, 2014 03:18 PM

Nat SakimuraGoogleが「忘れられる権利」の公開討論会を欧州で7回開催 [Technorati links]

September 09, 2014 12:19 AM


そこで、Googleは、この「忘れられる権利」と「知る権利」のバランスをどう取っていくかということの討論会を行い、検討してきたいというわけだ。討論会は、Googleが設置したAdvisory Councilによって運営され、議長は同Councilから出る。同Councilは、Wikimedia財団の創始者のJimmy Walesや、かつてプライバシー当局に勤めていたり、プライバシー関係の裁判に関わった裁判官によって構成されている。

9月15日から始まる、「忘れられる権利」を各検索エンジンにどのように整合性をもって適用していくかということに関するEU各国のデータ保護当局の会合の直前に始まるこの討論会を、EU当局はこの動きを歓迎しているとのこと。一方、フランスの当局であるCNILの長官であるIsabelle Falque-Pierrotinは、批判的で、ロイターの取材[3]に対して、「彼らはオープンで倫理的であると見られたいのでしょう。しかし、CouncilのメンバーはGoogleによって選任され、だれが聴衆として参加できるか、どのような結論が出るかは彼らが決めるのです」と述べている。


Advisory Council – Google Advisory Council

September 08, 2014

Paul TrevithickDignity 0.1 [Technorati links]

September 08, 2014 09:14 PM

Below is a version 0.1 ecosystem map of providers of four things: source code, communities, technical specifications and products that you can use right now. All are about promoting dignity–”the state or quality of being worthy of honor or respect”–of individuals. In place of dignity I could have used other words like autonomy, independence, re-decentralization, personal sovereignty, etc. Anything to do with today’s status quo web of centralized websites and services is out of scope. My personal bias towards user-managed identity and personal data ownership/control is also noted.


Some terminology:

Hyperlinked version of the above (even cooler!)

Ben Laurie - Apache / The BunkerSmoked Duck Breasts [Technorati links]

September 08, 2014 06:55 PM

I’ve recently started experimenting with smoking. The first experiment was lamb bacon, but that’s going to take some refining: it was good, but I’m sure it could be better. Recipe to follow once refined.

Today's was smoked duck breasts.

Marinate the duck breasts for 2 days in red wine, sugar, salt, pepper and Chinese five spice. Smoke (I use a ProQ Frontier) with apple wood (half a smoking box full) and lapsang souchong tea (contents of four teabags) at 125C for 3-4 hours.

KatasoftThe Problem with API Authentication in Express [Technorati links]

September 08, 2014 03:00 PM

Express has become a popular tool for building REST APIs, which rarely need the features most web frameworks ship with: session and cookie support, templating, etc. Since Express comes with none of these by default, you can quickly compose API services without navigating around (or needing to disable) core functionality.

At the same time, API authentication is an increasingly critical part of a developer’s stack. As more developers build APIs they need to provision API keys, audit API requests, and securely authenticate / authorize users against the API.

The problem: There aren’t enough good API authentication tools out there for Express, which leads to security holes and a lot of headaches. This post will walk through some of the challenges – and solutions – in managing your API access in Express.

REST API Security Explained

Before diving into the ecosystem options and challenges that currently exist, I’d like to take a moment to cover how REST API authentication should ideally work. There are exceptions to this rule, of course, but in general, the following hold true for most public (and private) API services.

There are two common ways to secure your REST API service: either via HTTP Basic Authentication or OAuth 2.0 with Bearer Tokens.

If your service is going to transmit sensitive information, it’s best to serve it over HTTPS to ensure that data can’t be leaked in transit.

API Keys

At the core of any API service are API keys. API keys allow developers to authenticate against an API service.

But what should API keys look like, ideally? What is the ideal way to generate API keys?

Well, before answering that question, let's take a look at one way you don't want to do it.

How Not to Generate API Keys

How many of you have used an API service that generates a single API key for you?

For instance, many times I’ll sign up for an API service and get a generated API key that looks something like this: 6myyAgKSZuLaextotmfQiRPdLkc79ycjgqhqKD51.

Unfortunately, if you’re only receiving a single API key from a provider, chances are, this provider isn’t properly securing their REST API.

All API keys should really be API key pairs.

The way HTTP Basic Authentication works is that it allows you to specify two pieces of information with each request: a ‘username’ and a ‘password’.

When you submit an API request to a service secured with HTTP Basic Authentication, what you’re really doing is taking your username and password (API key pair), smashing them together into a string (separated by a colon character), then base64 encoding the result and setting it as the HTTP Authorization header.

If you have only a single API key, you’ll end up encoding a credentials string that looks like this: apikey: (with nothing after the colon).

As you can imagine, this can lead to guessable API keys: an attacker can simply try every possible string until they ‘guess’ the correct key, at which point they can make API requests on your behalf. Depending on the length of your API key, this might make an attacker’s job very easy; if your API key is only 5 characters long, an attacker needs to try relatively few combinations of characters before hitting yours.

With two API keys (a username and a password), it is much harder for an attacker to ‘brute force’ your API keys as it will take much, much longer computationally to try that number of string permutations.
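To make "much, much longer computationally" concrete, here is a quick back-of-the-envelope sketch; the 62-character alphabet and the key lengths are illustrative assumptions, not a recommendation:

```javascript
// Worst-case guesses for a random key drawn from a 62-character
// alphabet (a-z, A-Z, 0-9): 62 raised to the key's length.
var ALPHABET_SIZE = 62;

function keyspace(length) {
  var total = 1;
  for (var i = 0; i < length; i++) {
    total *= ALPHABET_SIZE;
  }
  return total;
}

console.log(keyspace(5));   // 916132832 guesses for one 5-character key
console.log(keyspace(10));  // ~8.4e17 for a 5-character id plus a 5-character secret
```

Each extra character multiplies the attacker's work by 62, which is why a pair of long random values pushes brute force out of practical reach.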

If you’re generating API keys in a sequential non-random way, your API will be abused. Using sequentially-numbered IDs can open a potential attack vector known as fusking.

For instance, I’ve personally seen APIs in the past where the API given out to developers is an incrementing number. So for instance, as a developer I might get assigned an API key that looks like this: 123456. As you can imagine, if I were to try 123455, I’d probably be using another account’s API key!

For this reason, even if you’re generating sufficiently random API keys, it’s highly recommended that you provide developers with an API key pair.

API keys should consist of a pair of unique, random numbers, also known as an ID and secret (as a rule of thumb, you should use a globally unique number for both your ID and secret).

In Node, you can generate globally unique API keys using the node-uuid library:

var uuid = require('node-uuid');

var keyPair = {
  id: uuid.v4(),
  secret: uuid.v4(),
};

Because an API key is a pair of two values, you can treat the id as a ‘username’ and the secret as a ‘password’. Assuming the user keeps the secret value secure and treats it safely (like a password), API keys are as safe as username / password pairs, which means we can apply password best practice techniques to API keys, too. This gives your users another layer of protection against brute force attacks, and makes brute forcing an API key pair much more difficult.

Other Mistakes

Another common mistake developers make when building API services is that they only allow a user to have one API key.

While this might sound like a decent idea when building your service, consider a common problem: what happens when a user accidentally leaks their API key and they need to quickly replace it?

What happens if a user has an API key pair compromised and needs to change it in all of their codebase(s)?

In situations like this, it’s impossible for your user to avoid downtime as they’ll have to remove their existing API key, and generate a new one (or, in many cases, contact you for help).

If your user must remove their existing API key before creating a new one, then a user could experience downtime from the moment the user disables their old API key until the moment they generate a new API key and deploy it live to all of their application servers.

Allowing users to generate more than one API key pair prevents this situation and enables your users to seamlessly recover from accidents. Once they’ve switched over to a new API key pair — without any downtime — they can safely destroy their original API key pair without affecting service.

Multiple API keys also allow developers to build more complex applications and billing functionality. For instance, tracking service usage via API is an enormous pain if you want to bill each end-user separately. Giving your developers multiple API keys gives them the ability to do smart billing and access control downstream, for their end users.

Basic Authentication

Now that we’ve covered API key best practices, let’s talk about HTTP Basic Authentication.

The way HTTP Basic authentication works is fairly simple: the request Authorization header is set with a value computed as follows:

var encodedData = new Buffer('apiKeyId:apiKeySecret').toString('base64');
var authorizationHeader = 'Basic ' + encodedData;

This header then gets sent off to the API service, after which point a developer can then decode the credentials and authenticate them against the API key in the database using password hashing best practices.

Let’s take a look at a very basic Express app which simply prints off the HTTP Basic Authentication credentials received:

var express = require('express');
var app = express();

app.get('/', function(req, res) {
  if (!req.headers.authorization) {
    res.json({ error: 'No credentials sent!' });
  } else {
    var encoded = req.headers.authorization.split(' ')[1];
    var decoded = new Buffer(encoded, 'base64').toString('utf8');

    res.json({
      id: decoded.split(':')[0],
      secret: decoded.split(':')[1],
    });
  }
});

app.listen(3000);
In the code above, we’re reading the HTTP Authorization header, then decoding the resulting string, and lastly — extracting the id and secret submitted.

To submit your credentials using HTTP Basic Authentication, let’s take a look at how this works using cURL:

$ curl -v http://localhost:3000
* Adding handle: conn: 0x7fc153803a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fc153803a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 3000 (#0)
*   Trying ::1...
*   Trying
* Connected to localhost ( port 3000 (#0)
> GET / HTTP/1.1
> User-Agent: curl/7.30.0
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 32
< ETag: W/"20-1836064455"
< Date: Wed, 06 Aug 2014 20:17:21 GMT
< Connection: keep-alive
* Connection #0 to host localhost left intact
{"error":"No credentials sent!"}

Take a look at the HTTP headers we sent to the server. Notice how there is no Authorization header listed?

Now, let’s try again, but this time, we’ll submit our API credentials and see what happens:

$ curl -v --user 'hi:there' http://localhost:3000
* Adding handle: conn: 0x7fec13003a00
* Adding handle: send: 0
* Adding handle: recv: 0
* Curl_addHandleToPipeline: length: 1
* - Conn 0 (0x7fec13003a00) send_pipe: 1, recv_pipe: 0
* About to connect() to localhost port 3000 (#0)
*   Trying ::1...
*   Trying
* Connected to localhost ( port 3000 (#0)
* Server auth using Basic with user 'hi'
> GET / HTTP/1.1
> Authorization: Basic aGk6dGhlcmU=
> User-Agent: curl/7.30.0
> Host: localhost:3000
> Accept: */*
< HTTP/1.1 200 OK
< X-Powered-By: Express
< Content-Type: application/json; charset=utf-8
< Content-Length: 28
< ETag: W/"1c-59826226"
< Date: Wed, 06 Aug 2014 20:19:16 GMT
< Connection: keep-alive
* Connection #0 to host localhost left intact

As you can see, this time our request included an Authorization header that looked like this: Authorization: Basic aGk6dGhlcmU=. In this case, we successfully sent off our credentials using Basic Auth to our Express server!

By pairing SSL with Basic Authentication, you’re able to provide developers with a simple and reliable way to authenticate against your API service.

If you’re looking for a simple way to do API authentication, HTTP Basic Auth is an option, but it comes with well-known security vulnerabilities: credentials leakage, log file inspection, etc. If you’re looking for a more secure way to do API authentication, then we highly recommend you use OAuth (more on this below).

OAuth 2.0 with Bearer Tokens

If you care at all about security (and we hope you do) then OAuth 2.0 with bearer tokens is a great choice.

OAuth is a fairly complicated protocol. In exchange for added complexity, however, you give developers more power and control over API resources.

OAuth allows you to build an API where you can not only assign API keys, but also assign temporary access tokens to users.

Anyhow, OAuth is particularly useful to an API service where:

Here’s how OAuth 2.0 works:

Once the developer has exchanged their API key pair for an access token, they can then make API requests to your server using only an access token.

The way the token is submitted to the server is simple: a user must set an HTTP Authorization header that looks like: Bearer <token>, where <token> is the access token previously generated.
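Producing and parsing that header is simple enough to sketch directly; the helper names below are mine, not part of any library, and the token values are illustrative:

```javascript
// Build the Bearer scheme value for the HTTP Authorization header.
function buildBearerHeader(token) {
  return 'Bearer ' + token;
}

// Return the token, or null if the header is missing or not Bearer-shaped.
function extractBearerToken(authorizationHeader) {
  var parts = (authorizationHeader || '').split(' ');
  if (parts.length !== 2 || parts[0] !== 'Bearer') {
    return null;
  }
  return parts[1];
}

console.log(buildBearerHeader('8ab52b2e'));            // Bearer 8ab52b2e
console.log(extractBearerToken('Bearer 8ab52b2e'));    // 8ab52b2e
console.log(extractBearerToken('Basic aGk6dGhlcmU=')); // null
```

On the server side the extracted token is then looked up and checked for expiry before the request is honored.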

In the Node.js world, the simplest way to handle OAuth token generation / etc. is via the oauthorize library.

The Problem with Existing Tools

Now that we’ve covered the basics of how API authentication typically works, what’s wrong with the existing tools out there? Good question!

There is really only one popular tool that makes building APIs possible in Express: PassportJS. Passport is an excellent tool: a generic authentication middleware that comes with many third party backends to handle a wide variety of authentication services.

You can use passport-http in your Express app to handle Basic Authentication, and passport-http-oauth to handle OAuth 2.0 authentication.

While not exactly a Passport issue, building API services with Passport is still quite difficult: Passport only lets you authenticate API requests, and doesn’t provide any sort of authorization rules.

For instance, if you wanted to secure a simple API service with passport-http, you’d need to write the following code to instruct Passport on how to validate your API keys:

passport.use(new BasicStrategy(
  function(id, secret, done) {
    // connect to database and query against id / secret
    User.find({ id: id, secret: secret }, function(err, user) {
      if (err) {
        return done(err);
      } else if (!user) {
        return done(null, false);
      }
      return done(null, user);
    });
  }
));
Once I’ve told Passport how to validate API keys, I can then ‘authenticate’ API requests like so:

app.get('/', passport.authenticate('basic', { session: false }), function(req, res) {
  // ... route handler ...
});

Not too bad — however, it’d be nice to have convenience methods that wrap this functionality to reduce typing or allow you to assert more complex user permissions in addition to basic validation.

Currently, in order to assert permissions you’ve got to write code to:

This becomes quite a bit more complicated when using OAuth — in addition to setting up the Passport middleware and writing access rules, you’ve also got to run your own OAuth token generation endpoints using oauthorize. All in all you’re looking at 100+ lines of code to do even the basics.

While I love PassportJS, building API services with it can get very complicated; it would benefit from some clever wrappers.

How it Works Currently

If you’re currently building a REST API with Express and Passport, you’ve got several big tasks to complete before you can get up and running:

  1. Create a user model and persist it somewhere (a database server).
  2. Give users the ability to generate / remove multiple API key pairs.
  3. Set up your OAuth endpoints if you’re using OAuth.
  4. Set up PassportJS and write rules for authenticating users using either HTTP Basic Authentication or OAuth (both use cases require separate setup / configuration).
  5. Add the passport.authenticate middleware into each of your API routes to ensure Passport is successfully authenticating your users.

How Things Should Work in a Perfect World

In a perfect world, it’d be nice to have:

If you can assume that all users are the same, all permissions are stored the same way, and all API key pairs are stored the same way — it’s easy to build simpler wrappers around APIs.

In a perfect world, it’d be nice to be able to handle authentication via middleware in a simple way, for example:

// This would allow users to authenticate with either Basic Auth or OAuth
// 2.0 bearer tokens.
app.get('/', blah.ApiAuthenticationRequired, function(req, res) {
  res.json({ user: req.user, permissions: req.permissions });
});

Having a unified model would also allow you to do interesting things like this:

// Only users in the 'admins' group will be able to access this API
// endpoint.
app.get('/', blah.ApiGroupsRequired(['admins']), function(req, res) {
  res.json({ user: req.user });
});

This sort of functionality has been included in larger frameworks for a long time (Django, Rails, etc.) — and makes writing effective API services with these frameworks significantly easier.
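As an illustrative sketch (the ApiGroupsRequired name and the req.user.groups shape are assumptions for this example, not a real library's API), such a group check is just ordinary Express-style middleware:

```javascript
// Hypothetical group-based authorization middleware. Assumes a prior
// authentication step has populated req.user.groups with group names.
function ApiGroupsRequired(groups) {
  return function(req, res, next) {
    var userGroups = (req.user && req.user.groups) || [];
    // The user must belong to every required group.
    var allowed = groups.every(function(group) {
      return userGroups.indexOf(group) !== -1;
    });
    if (!allowed) {
      // Reject the request: the user lacks a required group.
      return res.status(403).json({ error: 'Insufficient permissions.' });
    }
    next();
  };
}
```

The point of a unified user model is that middleware like this can be written once, because every user is guaranteed to carry its group memberships in the same place.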

Recap and Thoughts

API authentication is a hot topic in the Node world right now — lots of developers are writing API services, and many of these services are not being properly secured.

While using tools like Passport can help you secure your REST API, there’s a lot of room to improve the API security landscape by creating unified user models and associated tools to simplify the work you need to do.

At Stormpath, we’ve been working on our own library to help alleviate some of the pain developers currently face in this space. Our new express-stormpath library allows you to easily build out web apps and API services in Express.

It provides a unified user model, permissions model, and API key model — it also lets you easily create API keys, and supports built-in authentication via HTTP Basic Authentication or OAuth 2.0.

While it still has a long way to go, if you do end up checking it out, I’d love to hear your feedback:

Vittorio Bertocci - MicrosoftGetting Started with ADAL for .NET–Quick Video Tutorial [Technorati links]

September 08, 2014 10:03 AM

It’s been a loong time since I recorded a screencast, and I am definitely rusty – but I want to slowly ease back in the game.

Here you can find a quick and not-so-polished video tutorial on ADAL. Enjoy!

September 07, 2014

Julian Bond [Technorati links]

September 07, 2014 02:06 PM

About a new analysis of the Club of Rome's models:

You decide. Can you spot the logical fallacies in each viewpoint?

Or is it simply that with exponential growth, if the resource limits don't get you the pollution will? Always assuming there's enough excess energy available to fund the continued exponential growth in the first place. And if there isn't then there are other problems with a global system that borrows from the future on the basis that exponential growth can continue indefinitely.

Bonus link:
 Limits to Growth is a pile of steaming doggy-doo based on total cobblers »
The Guardian praised it? Right, now we know for sure

[from: Google+ Posts]
September 06, 2014

Anil JohnPublic Sector Identity Assurance Guidelines and Standards [Technorati links]

September 06, 2014 02:00 PM

Identity assurance is a consistent requirement for how the public sector delivers services to its customers. While there are great many similarities in how global jurisdictions approach this, there are also interesting differences. This blog post provides pointers to some of those national standards and guidelines.

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

These are solely my opinions and do not represent the thoughts, intentions, plans or strategies of any third party, including my employer.

September 05, 2014

ForgeRockForgeRock Shortlisted as 100 Most Promising Oracle Solution Providers [Technorati links]

September 05, 2014 08:37 PM

Today I woke up to delightful news. CIOReview shortlisted ForgeRock as one of the 100 Most Promising Oracle Solution Providers. That’s right, CIO “analyzed over 3500 companies providing solutions for various Oracle products and … short listed companies that are at the forefront of tackling [Oracle] customer challenges.” On the one hand this is incredibly funny given ForgeRock’s long history as the anti-Oracle identity company. On the other hand, there is some serious truth to this award given the number of customers we’ve saved from legacy identity trauma. Can’t help but wonder what our acceptance speech would be like if we continued on in the process and won. Unfortunately, we’ll never know, as the next hurdle involves a “nominal sponsorship of $3000.” How very Oracle-ish.

The post ForgeRock Shortlisted as 100 Most Promising Oracle Solution Providers appeared first on ForgeRock.

IS4USetting up Oracle Access Portal Service [Technorati links]

September 05, 2014 02:28 PM
While setting up Oracle Access Portal Service I ran into some issues, and came up with the following workarounds.

Issue #1 – Configuring Web Application Templates

Although not very clear, the documentation states that one must use the ESSO LM Administration Console to configure web application templates and then either publish them directly to the LDAP repository, or export them from the ESSO LM Admin Console and import them back through the OAM Administration Console.

None of these methods work! Once the application is published to LDAP and you access it from the OAM admin console, the application is listed, but if you try to open it all you get is ADF exceptions. As for the export/import method, when you try to import the file from the OAM admin console, nothing happens, not even an ADF exception.

Given this, the way to configure applications is to use the ESSO LM Admin Console to configure a web application, then create a new application in the OAM Admin Console, replicating the application settings defined in the ESSO LM Admin Console.

Issue #2 – Oracle Traffic Director Webgate

Oracle Access Portal Service works by injecting a JavaScript resource (columbiaWeb.js) into HTML pages, which then calls methods located under /idaas.
The requests made by columbiaWeb.js to the /idaas resources were coming back with HTTP error 405 (Method Not Allowed). The HTTP method used to request these resources is GET, and the response included a list of allowed methods covering every HTTP method except GET.

While investigating this issue I found that the Webgate has some hardcoded directives regarding /idaas.
I realized this by executing the command: strings /WebGate_HOME/webgate/iplanet/lib/ | grep idaas
This command yields the following output:

I tried many different approaches to solve this issue with the OTD Webgate, including commenting out the entry in the Oracle Traffic Director instance’s instance-obj.conf configuration file that pointed to the /idaas resource and serving the resource from the OAM Server through other means, but was unsuccessful.

Eventually the solution I came up with is to use Apache HTTP Server instead of Oracle Traffic Director.

I installed and configured mod_weblogic in Apache so that I could map /idaas resources to the OAM Server, and then copied OTD’s columbiaWeb.js to the Apache Webgate folder /Webgate_HOME/webgate/apache/oamsso/global/.
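For reference, a /idaas mapping in Apache looks roughly like the following (the host and port are placeholders for the actual OAM managed server, not values from this setup):

```apache
# Hypothetical mod_weblogic mapping; replace host and port with the
# actual OAM managed server values for your environment.
<Location /idaas>
  SetHandler weblogic-handler
  WebLogicHost oam.example.com
  WebLogicPort 14100
</Location>
```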

I added the following entries to Apache’s configuration files where needed, to inject columbiaWeb.js:

AddOutputFilterByType SUBSTITUTE text/html
Substitute "s|</head>|<script type='text/javascript' id='OracleSSOProxy' essoLoggingLevel='0' src='/oamsso/columbiaWeb.js' oam_partner='Webgate_IDM_11g' essobasepath='http://oap.oam.demo' essoProxyType='DNS' essoConsoleLoggingLevel='0'></script></head>|i"

Added this entry to webgate.conf:

<LocationMatch "/idaas/*">
Satisfy any
</LocationMatch>

This last entry unprotects the /idaas resource; not doing so will result in an empty JSON response and the following entry in the OAM server log: <ESSOTokenManager object is null. Session could not created and hence use-case can not move ahead. Returning the empty response back.>

Issue #3 – Dealing With Iframes

While configuring Gmail I realized that columbiaWeb.js was not handling iframes correctly, so I created a new JavaScript file based on columbiaWeb.js. After this entry:

        var validFrames = this.getFrames();
        if (validFrames[0] === null) {
            if (global.oracleESSO.globals.logger.enabled(5)) global.oracleESSO.globals.logger.debug("matchTemplates end; No valid frames.");
            return 0;
        }

I added this piece of code so that it would ignore iframes:

        var validFramesTmp = [];
        for (i = 0; i < validFrames.length; i++) {
                if (validFrames[i] === window) {
                        validFramesTmp[i] = validFrames[i];
                }
        }
        validFrames = validFramesTmp;

This separate JavaScript file was created because I don't know the impact of this workaround on other configurations.
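The filtering logic behind the workaround can be expressed as a standalone function (the names here are illustrative, not taken from columbiaWeb.js), which keeps only frame entries that refer to the top-level window itself:

```javascript
// Standalone version of the iframe-filtering workaround: keep only
// frame entries that are the given top-level window, dropping iframes.
function filterTopLevelFrames(frames, win) {
  var result = [];
  for (var i = 0; i < frames.length; i++) {
    if (frames[i] === win) {
      result.push(frames[i]);
    }
  }
  return result;
}
```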

Unsolved Issues

While adding other forms to the configuration, for example a password change form, I get the following JavaScript error in every form: global.oracleESSO.templateData.templates[matchedSections[0][prop].ParentKey1] is undefined
The fields get highlighted, but there is no credential insertion. Hopefully this and other issues will be fixed in a future release.