February 08, 2016

OpenID.net - New OpenID Foundation Board Leadership

February 08, 2016 07:00 PM

Thanks to all who voted for representatives to the OpenID Foundation Board of Directors. 

George Fletcher of AOL will begin a new two-year term as the community member representative. His continued leadership on the Executive Committee ensures continuity on important initiatives like OpenID Connect Certification, and his deep technical expertise will assist the new work groups that are in the pipeline.

Also, each year the Corporate Members of the OpenID Foundation elect a member to represent them on the board. All corporate members were eligible to nominate, second and vote for candidates. I am pleased to announce the election of Dale Olds to the OpenID Foundation Board. This will be Dale’s first time serving on the Board, and he will bring a fresh perspective to complement his considerable technical and business experience.

I’d like to thank Torsten Lodderstedt, of Deutsche Telekom, who has made important contributions during his tenure as a Director. Torsten has helped drive international outreach and led the resolution of complex security challenges. His continued Chairmanship of the MODRNA Working Group helps provide a clear path forward for OpenID Connect’s contribution to standards development on global mobile platforms.

I wanted to acknowledge all those who put themselves forward as candidates and those now serving. Board participation is a substantial investment of time and energy and requires diligent consensus building.

Please join me in thanking George and Dale as well as the other Directors for their service to the OpenID Foundation and the community at large.

February 06, 2016

OpenID.net - Registration Now Open for OpenID Foundation Workshop on Monday, April 25, 2016

February 06, 2016 12:44 PM

OpenID Foundation Workshops provide insight into, and influence on, important internet identity standards. The workshop provides updates on the adoption of OpenID Connect across industry sectors. We’ll review progress on OpenID Connect Certification and gather feedback for the planned Relying Party certification. Work Group leaders will give an overview of MODRNA (the Mobile Profile of OpenID Connect) as well as other work in the pipeline, such as RISC, HEART, Account Chooser, Strong Authentication and the new Financial API profile. Leading technologists from ForgeRock, Microsoft, Google, Ping Identity and others will lead the discussions, giving updates on key issues and discussing how these standards help meet social, enterprise and government Internet identity challenges.

This event precedes IIW #22 in Mountain View in April 2016.

Registration is at https://www.eventbrite.com/e/openid-foundation-workshop-prior-to-iiw-spring-2016-tickets-21016555082

The OpenID Foundation Workshop Agenda

Thank you to Microsoft for their directed funding support of this event.

Don Thibeau
The OpenID Foundation

February 05, 2016

Ludovic Poitou - ForgeRock - What’s new in OpenDJ 3.0, Part III

February 05, 2016 03:43 PM

In the previous posts, I talked about the new PDB Backend in OpenDJ 3.0, and the other changes to backends, replication and the changelog.

In this last article about OpenDJ 3.0, I’m presenting the most important new features and enhancements in this major release:

Certificate Matching Rules

OpenDJ now implements the CertificateExactMatch matching rule in compliance with “Lightweight Directory Access Protocol (LDAP) Schema Definitions for X.509 Certificates” (RFC 4523), and implements the schema and syntaxes for certificates, certificate lists and certificate pairs.

It’s now possible to search a directory for an entry holding a specific certificate, using a filter such as the one below:

(userCertificate={ serialNumber 13233831500277100508, issuer rdnSequence:"CN=Babs Jensen,OU=Product Development,L=Cupertino,C=US" })

Password Storage Schemes

The PKCS5S2 password storage scheme has been added to the list of supported storage schemes. While it is less secure and flexible than PBKDF2, it allows some of our customers to migrate from systems that use the PKCS5S2 algorithm. Other password storage schemes have been enhanced to support arbitrary salt lengths, which helps with other migrations (without requiring all users to set a new password).

Disk Space Monitoring

In previous releases, each backend had its own disk space monitoring function, regardless of the filesystems or disks used. In OpenDJ 3.0, we’ve created a disk space monitoring service, and the backends, replication and log services register with it. This allows the server to optimise the resources it spends on monitoring, ensures that all disks containing writable data are monitored, and raises alerts when free space falls below a low threshold.


There are many improvements in many areas of the server: in the REST to LDAP services and gateway, optimisations on indexes, dsconfig batch mode, DSML Gateway supporting SOAP 1.2, native packages… For the complete details, please read the Release Notes.

As always, the best way to really see and feel the difference is to download and install the OpenDJ server and play with it. We’re providing a Zip installation, an RPM and a Debian package, plus the DSML Gateway and the REST to LDAP Gateway as WAR files.

Over the course of the development of OpenDJ 3.0, we’ve received many contributions in the form of code, issues raised in our JIRA, and documentation. Our deepest thanks go to all the contributors and developers:

Auke Schrijnen, Ayami Tyndal, Brad Tumy, Bruno Lavit, Bernhard Thalmayr, Carole Forel, Chris Clifton, Chris Drake, Chris Ridd, Christian Ohr, Christophe Sovant, Cyril Grosjean, Darin Perusich, David Goldsmith, Dennis Demarco, Edan Idzerda, Fabio Pistolesi, Gaétan Boismal, Gary Williams, Gene Hirayama, Hakon Steinø, Ian Packer, Jaak Pruulmann-Vengerfeldt, James Phillpotts, Jeff Blaine, Jean-Noël Rouvignac, Jens Elkner, Jonathan Thomas, Kevin Fahy, Lana Frost, Lee Trujillo, Li Run, Ludovic Poitou, Manuel Gaupp, Mark Craig, Mark De Reeper, Markus Schulz, Matthew Swift, Matt Miller, Muzzol Oliba, Nicolas Capponi, Nicolas Labrot, Ondrej Fuchsik, Patrick Diligent, Peter Major, Quentin Cassel, Richard Kolb, Robert Wapshott, Sébastien Bertholet, Shariq Faruqi, Stein Myrseth, Sunil Raju, Tomasz Jędrzejewski, Travis Papp, Tsoi Hong, Violette Roche-Montané, Wajih Ahmed, Warren Strange, Yannick Lecaillez. (I’m sorry if I missed anyone…)

Filed under: Directory Services Tagged: directory, directory-server, ForgeRock, identity, java, ldap, opendj, opensource, release

Nat Sakimura - Another Set of Happy Birthday Variations

February 05, 2016 11:48 AM

Last time I introduced Peter Heidrich’s Happy Birthday variations; today, here is another one. This one sets its variations on tunes that are more widely famous, so it may go over even better.

It was performed in celebration of the Qatar Philharmonic Orchestra’s first birthday. The piece is “Variations on Happy Birthday to You”, composed by Peter Weiner.

The complete work is on Naxos.


Variations on Happy Birthday to You http://ml.naxos.jp/work/300331

» Variation 1: Handel
» Variation 2: Mozart
» Variation 3: Beethoven
» Variation 4: Schubert
» Variation 5: Rossini
» Variation 6: Bruckner
» Variation 7: Wagner
» Variation 8: Bizet
» Variation 9: Johann Strauss Sohn
» Variation 10: Blues und Dixie
» Variation 11: Johann Strauss Vater
» Variation 12: Happy Birthday


Copyright © 2016 @_Nat Zone All Rights Reserved.

February 04, 2016

ForgeRock - Hello Privacy Shield, So Long Safe Harbor

February 04, 2016 06:38 PM

(Note: this piece was written with much input from Eve Maler)

Reactions to the news out of Brussels earlier this week on the EU-US Safe Harbor negotiations were somewhat… polarized. In the “Love It!” camp were US negotiators and the Computer & Communications Industry Association, the main lobbying organ for Internet giants like Google, Facebook and Microsoft, which are all eager to get an agreement similar to Safe Harbor back in place. On the seething, enraged “Hate This! – We’ll See You In Court!” side were data privacy advocates like Max Schrems, the Austrian legal scholar who pursued the case that resulted in the overthrow of Safe Harbor late last year.

Safe Harbor

So, what exactly was the news? Officially, the EU announced a new framework that will “protect the fundamental rights of Europeans where their data is transferred to the United States and ensure legal certainty for businesses.” Named the EU-US Privacy Shield, the agreement at first glance appeared to resolve the contentious international wrangling over personal privacy and the transfer of data between Europe and the US. But digging down, it’s apparent this was more of a “we’re announcing that we’ve agreed to agree on something, but we’re still working out the details.” Hence the heavy reliance on the word “framework”, which shows up eight times in the one-page press release.

It’s clear that the heavy lifting of working out exact provisions remains, and it’s possible that an agreement acceptable to both sides is still possible. But in the meantime, the reception to the announcement among data privacy activists inside and outside government was swift and negative. Jan Philipp Albrecht, member of the European Parliament, dismissed Privacy Shield as the same old same old: “This new framework amounts to little more than a reheated serving of the pre-existing Safe Harbor decision. The EU Commission’s proposal is an affront to the European Court of Justice, which deemed Safe Harbor illegal, as well as to citizens across Europe, whose rights are undermined by the decision.”

Schrems was all over Twitter and conducted multiple media interviews to register his distaste for Privacy Shield, predicting that if the final agreement looked anything like what was announced, the whole matter would end up back in court: “If this case goes back to the ECJ – which it very likely will do, if there is a new safe harbor that does not meet the test of the court – then it will fail again.”


What does this all mean for organizations with customers in Europe? Our take is that this debate is likely to continue for the foreseeable future. US interests obviously want to get back to the status quo, and EU negotiators seem willing to compromise, but privacy advocates are drawing a hard line, and they have a sympathetic ear at the ECJ. Remember that this entire argument arose through the Snowden disclosures that the US government was carrying out blanket surveillance on data traffic coming into US data centers. If the ultimate agreement doesn’t satisfy the privacy hardliners, we could be looking at months more of uncertainty. After all, the side that has previously lost the world’s trust will have to do some work to regain it, since proving a negative – that surveillance isn’t happening – is difficult.

In the press release we put out last week, our Eve Maler pointed toward a data privacy future built on individual consent as a way for enterprises to get out in front of, or even rise above, these ongoing debates: “Organizations looking to design personalized digital services that also respect an individual’s right to control access to their data will find that we have addressed that concern and can offer our customers a competitive advantage. Further, by designing services that offer this transparency and respect, organizations are also better able to address the implications of the emerging regulatory landscape.”

This kind of “go above and beyond” approach to data privacy is gaining traction from many quarters. Accenture put out a trend report recently that put it this way:

“To gain the trust of individuals, ecosystems, and regulators in the digital economy, businesses must possess strong security and ethics at each stage of the customer journey. And new products and services must be ethical- and secure-by-design. Businesses that get this right will enjoy such high levels of trust that their customers will look to them as guides for the digital future.” (emphasis mine)

Sounds like good advice.

The post Hello Privacy Shield, So Long Safe Harbor appeared first on ForgeRock.com.

Ludovic Poitou - ForgeRock - What’s new in OpenDJ 3.0 – Part II

February 04, 2016 04:42 PM

Yesterday, I talked about the most important change in OpenDJ 3.0: the new PDB Backend. Let me detail other new and improved features of OpenDJ 3.0, still related to backends and replication.

As part of the work for the new backend, we’ve worked on the import process, in order to make it more I/O efficient and thus faster.

Here are some numbers from importing 1,000,000 users into OpenDJ.

In OpenDJ 2.6.3:

$ import-ldif -l ../1M.ldif -n userRoot
[03/Feb/2016:15:41:42 +0100] category=RUNTIME_INFORMATION severity=NOTICE msgID=20381717 msg=Installation Directory: /Space/Tests/Blog/2.6/opendj
[03/Feb/2016:15:42:54 +0100] category=JEB severity=NOTICE msgID=8847454 msg=Processed 1000002 entries, imported 1000002, skipped 0, rejected 0 and migrated 0 in 71 seconds (average rate 13952.5/sec)

In OpenDJ 3.0, with the JE Backend:

$ import-ldif -l ../../1M.ldif -n userRoot
[03/02/2016:15:45:19 +0100] category=UTIL seq=0 severity=INFO msg=Installation Directory: /Space/Tests/Blog/3.0/opendj
[03/02/2016:15:46:22 +0100] category=PLUGGABLE seq=74 severity=INFO msg=Processed 1000002 entries, imported 1000002, skipped 0, rejected 0 and migrated 0 in 62 seconds (average rate 15961.2/sec)

In OpenDJ 3.0, with the PDB Backend

$ import-ldif -l ../../1M.ldif -n userRoot
[03/02/2016:15:59:38 +0100] category=UTIL seq=0 severity=INFO msg=Installation Directory: /Space/Tests/Blog/3.0/opendj
[03/02/2016:16:00:38 +0100] category=PLUGGABLE seq=48 severity=INFO msg=Processed 1000002 entries, imported 1000002, skipped 0, rejected 0 and migrated 0 in 58 seconds (average rate 17038.7/sec)

We’ve also completely reworked the storage layer for the replication changes, moving away from the BDB JE database. Instead, we’re using direct files, again providing much smaller disk occupancy (and thus optimising I/Os) but also allowing much more efficient purging of old data.

As part of these changes, we’ve made serious improvements to the way the replication changes can be read and searched over LDAP under the “cn=Changelog” suffix. More importantly, we now have a way to ensure a complete ordering of the published changes, and thus consistency of their “changeNumbers”. That is to say, when reading “cn=Changelog” on different replicated servers, the change with “changeNumber=N” will now be the same on all servers, allowing applications that read these changes to fail over from one server to another. We’ve also added a way to resynchronise these changeNumbers when adding a new replica to an existing topology, or when restoring one after a maintenance period.

Still on the subject of the changelog, we’ve added another level of security to it by introducing a “changelog-read” privilege, which gives finer control over which applications and users are allowed to read data from the “cn=Changelog” suffix.

That’s all for today. Tomorrow, I will continue with all the other new features and enhancements in OpenDJ 3.0.

If you haven’t done so yet, you can download OpenDJ 3.0 from ForgeRock’s BackStage and start playing with it. And check the Release Notes for more information.

Filed under: Directory Services Tagged: directory, directory-server, ForgeRock, ldap, opendj, opensource, release

Matthew Gertner - AllPeers - Exercise Tips when Trying for a Baby

February 04, 2016 09:12 AM

Your health plays an important role when trying for a baby, and weight has a great impact on the success of your efforts. A study carried out by the National Institutes of Health links a high BMI (greater than 25) with lower birth rates in women undergoing fertility treatments.


While exercising is important for a healthy body and improving your chances of success, too much exercise can be counterproductive.

Not so hard

Those who like to go all out in the gym or when hitting the track would be well advised to ease up on exercising while trying for a baby. Vigorous exercise can actually decrease the rate of conception and therefore hinder your chances of success with fertility treatments.

If you have a regular workout regime that is designed for advanced or intermediate athletes, it may be time to tone it down a notch. Talk to your doctor about reducing your activity to improve your chances of success.

Work out fewer hours

If you spend more than four hours a week on vigorous exercise, then it’s time to take it easy. A study in 2006 showed that women who exercised for more than 4 hours a week reduced their chances of giving birth by 40%. These women were more likely to experience cycle cancellation and implantation failure. The study also showed increased chances of pregnancy loss in these women.

This is good reason to reduce your weekly exercise to less than 4 hours. Opt for three thirty-minute sessions of low-intensity exercise per week.

Opt for low impact exercises

High-impact activities can be beneficial for fat burning but can result in injuries that would hinder your efforts to have a baby. Any injuries or impact to the abdominal wall during this period should be avoided especially when a baby has been freshly implanted. Avoid high impact activities to improve your chances of pregnancy.

Take a break from exercise

During the week that you undergo egg retrieval, it’s important to take a break from exercise completely. That means even low impact activities. While this may seem impossible to the gym junkie, you probably won’t feel like exercising anyway. The medications given for fertility treatments have various side effects that will make you feel like taking a good long rest. These side effects include fatigue, bloating and mild discomfort.

It’s OK to take a break during this week. Listen to the signals your body gives you and respond to them. Don’t push yourself, as this is a delicate time and you shouldn’t do anything to compromise the process.

Find an alternative form of stress relief

If exercising is your form of stress relief, then you may need to find another activity to take your mind off your troubles. Take up a new hobby or learn a new craft. Take time to foster relationships with your loved ones and enjoy your time with them. Read a book or take a walk.

Taking it easy during this critical journey will greatly improve your chances of success.


The post Exercise Tips when Trying for a Baby appeared first on All Peers.

February 03, 2016

Julian Bond - None of those "Shine-on; Let’s do an SF anthology about positive futures!" people have ever approached...

February 03, 2016 07:00 PM
None of those "Shine-on; Let’s do an SF anthology about positive futures!" people have ever approached Peter Watts, for some reason. I wonder why. Here's his typically scary take on Zika and other emerging infectious diseases (EID).
 No Moods, Ads or Cutesy Fucking Icons (Re-reloaded) » Viva Zika! »
There's this guy I knew, Dan Brooks. Retired now, an eminent parasitologist and evolutionary biologist back in the day. He did a lot of work on emerging infectious diseases (EIDs, for you acronym fetishists) down in Latin America. A few years back I wrote some introductory text for an online ...

[from: Google+ Posts]

Ludovic Poitou - ForgeRock - OpenDJ 3.0.0 has been released…

February 03, 2016 05:31 PM

As part of the ForgeRock Identity Platform release that we did last week, we’ve released a major version of our Directory Services product: OpenDJ 3.0.0.

The main and most important change in OpenDJ 3.0 is the work on the backend layer, with the introduction of a new backend database supported by a new low-level key-value store. When installing a new instance of OpenDJ, administrators now have the choice of creating a JE Backend (based on Berkeley DB Java Edition, as with previous releases of OpenDJ) or a PDB Backend (based on the new PersistIt library). When upgrading, existing local backends will be transparently upgraded to JE Backends, but indexes will need to be rebuilt (this can be done automatically during the upgrade process).

Both backends have the same capabilities and very similar performance. Most importantly, both benefit from a number of improvements over previous releases: the size of databases and index records is smaller, and some indexes have been reworked to deliver better performance for both updates and reads. Overall, we’ve increased the throughput of adding and deleting entries in OpenDJ by more than 15%.

But the two backends are different, especially in the way they deal with database compression. Because of the way it handles journals and compression, the new PDB backend may deliver better overall throughput, but its disk occupancy may grow significantly under heavy load (it favours updates over compression). Once the throughput falls below a certain threshold, compression becomes highly effective and the overall disk occupancy is optimised.

A question I often get is “Which backend should I use?”. And I don’t have a definitive answer. If you have an existing OpenDJ instance and you’re upgrading to 3.0, keep the JE Backend: it’s a simple and automated upgrade. If you’re installing a new instance of OpenDJ, then it’s a matter of risk. We don’t have the same wide experience with the PDB backend as we have had with the JE backend over the last 10 years. So, if you want to be really safe, choose the JE Backend. If you have time to test and stage your directory service before putting it into production, you might want to go with the PDB Backend, as, moving forward, we will focus our performance testing and improvements essentially on the PDB backend.

That’s all for now. In a follow-up post, I will continue to review the changes in OpenDJ 3.0…

Meanwhile, you can download OpenDJ 3.0 from ForgeRock’s BackStage and start playing with it. And check the Release Notes for more information.

Filed under: Directory Services Tagged: directory, Directory Services, directory-server, ForgeRock, identity, java, ldap, opendj, opensource, release
February 02, 2016

Nat Sakimura - If Nokia won’t do, how about Happy Birthday?

February 02, 2016 11:01 PM

However good Ensemble Mega Ne may be, FamilyMart and JR jingles won’t take them to the world stage, so yesterday I considered the Nokia ringtone instead, and concluded that taking on Tárrega’s original head-on would be a losing battle no matter how you look at it. So, is there nothing else? No tune that absolutely everyone knows? Of course there is: “Happy Birthday”. Everyone knows that one.

The trouble is, there is already a famous set of variations on it: Peter Heidrich’s Happy Birthday Variations (1935). It comprises 14 variations in all, tracing the history of Western music from the Baroque through the Classical and Romantic periods to film music, jazz and tango, before closing with a csárdás.


  1. Theme (1:04)
  2. Variation 2, in the style of Haydn (1:43) [String Quartet No. 62 in C Major, Op. 76, No. 3, Hob.III:77, “Emperor”]
  3. Variation 4, in the style of Beethoven (3:03) [String Quartet No. 8 in E Minor, Op. 59, No. 2, “Rasumovsky”]
  4. Variation 13, in the style of tango (4:14)
  5. Hungarian-style cadenza, with a momentary flash of Nokia
  6. Variation 14, in the style of a csárdás (6:26)





Incidentally, the complete recording by the Venice String Quartet is available on Naxos: http://ml.naxos.jp/work/206829



Nat Sakimura - Setting up name-based HTTPS virtual hosts (SNI) with Let’s Encrypt

February 02, 2016 02:57 PM

Let’s Encrypt is a free TLS server certificate service run by a US foundation, the Internet Security Research Group (ISRG). The service’s watchwords are free, automated, secure, transparent, open and cooperative, and the foundation itself is a 501(c)3 non-profit[1]. Looking at the members of its Technical Advisory Board, you find familiar friends such as Joe Hildenbrand and Karen O’Donohue. As for sakimura.org, www.sakimura.org has supported TLS for a while, but I had skimped on certificate costs (and dreaded the setup), so hosts like nat.sakimura.org were never TLS-enabled. With a service like this available, there is no excuse any more. So let’s set it up.

Installing Let’s Encrypt

To configure anything, you first have to install Let’s Encrypt. Some Linux distributions already offer it as a package, but the Ubuntu release I am running does not include it, so the installation starts by pulling the files from GitHub with git:

$ git clone https://github.com/letsencrypt/letsencrypt
$ cd letsencrypt
$ ./letsencrypt-auto --help

Oops, all sorts of errors. Python seems to be too old: the installed version is Python 2.7, which, among other problems, does not support SNI. So let’s install Python 3. I installed python3.4-venv with aptitude.


$ ./letsencrypt-auto --help

Updating letsencrypt and virtual environment dependencies......
Requesting root privileges to run with virtualenv: sudo /home/nat/.local/share/letsencrypt/bin/letsencrypt --help

  letsencrypt-auto [SUBCOMMAND] [options] [-d domain] [-d domain] ...

The Let's Encrypt agent can obtain and install HTTPS/TLS/SSL certificates.  By
default, it will attempt to use a webserver both for obtaining and installing
the cert. Major SUBCOMMANDS are:

  (default) run        Obtain & install a cert in your current webserver
  certonly             Obtain cert, but do not install it (aka "auth")
  install              Install a previously obtained cert in a server
  revoke               Revoke a previously obtained certificate
  rollback             Rollback server configuration changes made during install
  config_changes       Show changes made to server config during installation
  plugins              Display information about installed plugins

Choice of server plugins for obtaining and installing cert:

  --apache          Use the Apache plugin for authentication & installation
  --standalone      Run a standalone webserver for authentication
  (nginx support is experimental, buggy, and not installed by default)
  --webroot         Place files in a server's webroot folder for authentication

OR use different plugins to obtain (authenticate) the cert and then install it:

  --authenticator standalone --installer apache

More detailed help:

  -h, --help [topic]    print this message, or detailed help on a topic;
                        the available topics are:

   all, automation, paths, security, testing, or any of the subcommands or
   plugins (certonly, install, nginx, apache, standalone, webroot, etc)




$ ./letsencrypt-auto --apache

Strictly speaking I probably should have chosen Secure (HTTPS only), but that would have reset the hard-won Facebook “Like” counts on my old posts, which would be sad, so I chose Easy (serving both HTTP and HTTPS).

(Figure 3) Completion screen, prompting you to test the site with Qualys SSL Labs


With just that, the TLS certificate is installed and the web server is configured. Amazing. I had braced myself for writing a pile of Apache config, but it was all done automatically. Looking in /etc/apache2/sites-enabled, there are configuration files named “domain-le-ssl.conf”, such as cimbalom.jp-le-ssl.conf. From what I can tell, when you have multiple virtual hosts, it uses each host’s HTTP configuration file to auto-generate the corresponding SSL configuration.
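For reference, a generated “domain-le-ssl.conf” typically looks something like the sketch below. The host name, DocumentRoot and exact directives here are illustrative and depend on your Apache setup and the client release; the certificate paths under /etc/letsencrypt/live/ are the client’s usual layout:

```apache
<IfModule mod_ssl.c>
<VirtualHost *:443>
    ServerName nat.sakimura.org
    DocumentRoot /var/www/nat

    # Certificate files issued by Let's Encrypt
    SSLCertificateFile      /etc/letsencrypt/live/nat.sakimura.org/cert.pem
    SSLCertificateKeyFile   /etc/letsencrypt/live/nat.sakimura.org/privkey.pem
    SSLCertificateChainFile /etc/letsencrypt/live/nat.sakimura.org/chain.pem

    # Shared TLS settings (protocols, ciphers) managed by the client
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
</IfModule>
```

Each virtual host gets its own file like this, which is why the tool can enable TLS per host without touching the original HTTP configuration.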


The finishing touch is to go to the Qualys site and check the TLS configuration. As shown in the figure above, you point it at https://www.ssllabs.com/ssltest/analyze.html?d=sakimura.org and run the test. The result for this site came out like this:

(Figure 4) Qualys SSL Report for sakimura.org




Having used Let’s Encrypt for the first time, I am struck by how convenient it is. It makes you wonder what all the TLS setup work until now was for. At this point there is really no excuse left not to use TLS.

On the other hand, I also realized that making good use of it takes a little advance preparation. The prime example: even if you never intend the bare domain to serve content, you should still set up a web server for a domain like sakimura.org itself and run Let’s Encrypt against it. (Qualys flagged my setup as “Confusing” on this point.) Certificates obtained from Let’s Encrypt are valid for just under three months, so I plan to fix this before the next renewal. Automatic renewal comes after that.




Katasoft - Secure Password Hashing in Node with Argon2

February 02, 2016 09:09 AM

Storing passwords securely is an ever-changing game. For the past few years (2013 to 2015), Jean-Philippe Aumasson has been running a world-renowned Password Hashing Competition, in which security researchers submit, validate, and vet the best password hashing algorithms.

Just recently, the competition wrapped up, naming Argon2 king of the hashing algorithms.

This is good news for any web developer out there: you can now use the Argon2 algorithm to more securely store your users’ passwords!

Today I’m going to show you how you can easily hash your users’ passwords using the Argon2 algorithm, and introduce you to some best practices.

Argon2 Recommendation

In the crypto community, standards are created by holding open competitions in which researchers analyze various cryptographic algorithms, slowly looking for faults, weeding out the weaker submissions.

This is how Argon2 (pdf) was born.

The Password Hashing Competition yielded 24 separate hashing algorithm submissions, which were eventually pruned down to 9 finalists, out of which Argon2 was named the winner.

What does this mean for normal web developers? If you’re starting a new project, or looking to improve security on an existing project, you should immediately start using the Argon2 hashing algorithm to securely store all of your user passwords.

If you’re into C, you can look at the reference Argon2 implementation on Github here: https://github.com/P-H-C/phc-winner-argon2

Using Argon2 in Node.js

As of just a few weeks ago, you can now get started using Argon2 in Node, thanks to efforts by Ranieri Althoff.

First up, you need to install the argon2 npm package:

$ npm install argon2

Next, you need to install the node-gyp library globally; this is required because it allows the argon2 package to compile and use the underlying C implementation of Argon2:

$ npm install -g node-gyp

Once this is done, you need to ensure your project is a Git repository; this is required because of the way the underlying Argon2 library is used. To ensure you’re in a Git repository, you can run the following command:

$ git init

Next up, you’ll need to import the argon library:

var argon = require('argon2');

Now, let’s say you receive a password from a user and want to securely hash it for storage in a database. You can do this using the hash method:

NOTE: We’ll also let the argon2 library generate the cryptographic salt for us as well in a secure fashion.

argon.generateSalt((err, salt) => {
  if (err) {
    throw err;
  }

  argon.hash('some-user-password', salt, (err, hash) => {
    if (err) {
      throw err;
    }

    console.log('Successfully created Argon2 hash:', hash);
    // TODO: store the hash in the user database
  });
});

Now, after you’ve stored a user’s Argon2 hash, how can you log a user in using the plain text password to verify it is correct? Using the verify method, of course!

argon.verify('previously-created-argon-hash-here', 'some-user-password', err => {
  if (err) {
    console.log('Invalid password supplied!');
  } else {
    console.log('Successful password supplied!');
    // TODO: log the user in
  }
});

And that’s pretty much it, easy right?!


If you’re building new webapps, you should check out the NPM argon2 module for securely hashing your user passwords. It’s the new recommended password hashing algorithm, so don’t be afraid to get your hands dirty!

Also: if you’re looking for a service that makes creating users and storing user data really simple and secure, be sure to go sign up for Stormpath! We’re free to use, and provide all sorts of awesome features, including transparent password hashing upgrades, secure user data storage, and lots more.

We even have an amazingly simple and powerful express library that makes building secure websites and API services a few lines of code =)

February 01, 2016

Nat Sakimura - The FamilyMart theme won’t take you global; it has to be Nokia, right?!

February 01, 2016 05:33 PM

I was mightily impressed by Ensemble Mega Ne’s “Variations on a Theme of FamilyMart (woodwind quartet + piano)” and “Contrapuntal Piece on a Theme of the Yamanote Line”, but when it comes to going global, FamilyMart and JR themes just won’t cut it. The target audience is too narrow. You have to do it with a melody everybody knows. So what would that be?




With this, Ensemble Mega Ne could surely win. I’d love them to try it, but there is one more really good candidate: the Gran Vals (Grand Waltz) by Francisco Tárrega (1852-1909), the great Spanish composer of the late 19th and early 20th centuries who wrote the famous guitar piece “Recuerdos de la Alhambra”. What’s that, “Were there Nokia phones in the 19th century?” Of course not. The Nokia ringtone took its melody from this Gran Vals.




Matt Pollicove - CTI - You've read my ramblings, now listen to them!

February 01, 2016 04:19 PM
You've been reading my ramblings for years here and on SCN. Now you have a chance to listen to some of my thoughts on SAP IDM, and a bit on SAP IDM 8. I was recently interviewed by long-time colleague and fellow IDM expert Scott Eastin for his IDM Masters Interview Series.

Please take a moment to listen to the interview and support Scott's efforts!

BTW, please let me know if this is interesting and if we should consider a regular podcast / YouTube discussion of SAP IDM, along with topics that you would like to see covered!


Julian BondJames Bridle's SciFi short about a post mass-data world. [Technorati links]

February 01, 2016 02:27 PM
Reading James Bridle's SciFi short about a post mass-data world[1]. Recommended by Bruce Sterling[2]. Leading to re-visiting Bridle's blog and a piece about 5-eyes surveillance[3]. And his short film of a CGI walk-through of UK immigrant detention centres[4]. And another piece about the Space Blanket as A Flag For No Nations (or perhaps a Flag Of No Nation)[5]. All while listening to Fatima Al Qadiri - Brute, a soundtrack for 21st century protest[6].

[1] http://motherboard.vice.com/read/the-end-of-big-data
[2] http://www.wired.com/beyond-the-beyond/2016/01/the-end-of-big-data-a-science-fiction-story-by-james-bridle/
[3] http://booktwo.org/notebook/hyper-stacks-post-enlightenment/
[4] http://booktwo.org/notebook/seamless-transitions/
[5] http://booktwo.org/notebook/a-flag-for-no-nations/
[6] http://www.factmag.com/2016/01/20/fatima-al-qadiri-new-album-brute-battery/
http://thequietus.com/articles/19578-listen-new-fatima-al-qadiri [7]
[7] Coincidentally, the image for Brute is a Teletubby wearing riot police gear. And while it's obviously the purple Tinky Winky famously outed by Jerry Falwell, it's got Po's circular aerial and not Tinky Winky's triangle. https://en.wikipedia.org/wiki/Teletubbies#Tinky_Winky_controversy. Except that actually it's Joe Kline's Po-Po [8]
 The End of Big Data | Motherboard »
It's the world after personal data: all identifying information is illegal. No servers, no search records, no social, no surveillance. A pair of satellites, circling the planet, make sure the data cen…

[from: Google+ Posts]

Ludovic Poitou - ForgeRockNo more data about you without you ! [Technorati links]

February 01, 2016 08:05 AM

Here’s what we think and DO about Privacy at ForgeRock !

ForgeRock – Privacy from ForgeRock Videos on Vimeo.

Filed under: Identity Tagged: ForgeRock, privacy, uma, video

KatasoftGetting Started with SAML in PHP Applications [Technorati links]

February 01, 2016 08:00 AM


Rolling out your own SAML integrations has always been hard. It’s a complex implementation that’s difficult to build securely and efficiently whether you’ve built it before or not. Here at Stormpath, we’ve rolled out a simple solution to enable SAML in your applications without custom code!

With only a few steps, you can integrate SAML into your PHP project. No matter which identity provider you are using, the code will remain the same. We can even help you with a no-code SAML experience with our ID Site integration. In this tutorial, we’ll walk you through how to enable SAML support for easy setup and usage of your favorite SAML provider with the Stormpath PHP SDK.

What Is SAML?

SAML (Security Assertion Markup Language) is an XML-based standard for securely exchanging authentication and authorization information between parties: identity providers, service providers, and users. Well-known IdPs include Salesforce, Okta, OneLogin, and Shibboleth. Your apps are the SAML service providers, and the Stormpath API makes it possible to integrate them with the IdPs (but without the headaches).

Set Up Your SAML Provider for PHP Applications

Configuration of a SAML provider is stored under a directory in Stormpath. This means the first step will be setting up a directory as a SAML directory. As a refresher, setting up a directory using the Stormpath PHP SDK looks like this:

$directory = \Stormpath\Resource\Directory::create([
    'name' => 'My directory',
    'description' => 'My directory description'
]);

This will create a basic cloud directory in the Stormpath system. To make this a SAML directory, we first instantiate a SAML provider and then attach it to a directory. To instantiate a SAML provider, use the following code:

$samlProvider = \Stormpath\Resource\SamlProvider::instantiate([
    'ssoLoginUrl' => '',
    'ssoLogoutUrl' => '',
    'encodedX509SigningCert' => '',
    'requestSignatureAlgorithm' => ''
]);

Once you have your provider instantiated, you then just add one item to the directory creation. As part of the properties, add a provider item and set it to the instantiated provider.

$directory = \Stormpath\Resource\Directory::create([
    'name' => 'My SAML Provider Directory',
    'description' => 'My SAML Provider Directory description',
    'provider' => $samlProvider
]);

Now you have a SAML provider you can work with for your application.

Configure Your Service Provider In Your Identity Provider

Your Identity Provider (IdP) needs to understand some information about the Service Provider. We’ve simplified this process by providing a URL that you can either hand to your IdP directly, or pull the needed data from yourself. To access this information, get it from the directory provider.

$provider = \Stormpath\Resource\SamlProvider::get($directory->provider->href);
$providerMetaData = $provider->serviceProviderMetadata;

This will give you a few items that you need to provide to your IdP. Some IdPs can take the href provided as part of this response and know how to handle it; we follow the standard format for the metadata. Other IdPs will require you to pull some of the information out of the response and provide it to them. The information you will need from the response follows:

Configure Your Application To Allow Callbacks

Configuring your application is an important part of the process. You will need to tell your application which URLs it is allowed to have the IdP send your users back to. These URLs should be hosted by you. We have made this just like updating any other property on your application.



Map SAML Attributes To Stormpath

You may be wondering how Stormpath knows which values to use when putting the account information into the Stormpath directory. This is all done with the Attribute Mapping Statements. Your IdP will return an XML document with attributes you can pull from.

    <saml:Attribute Name="uid" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
      <saml:AttributeValue xsi:type="xs:string">test</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="mail" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
      <saml:AttributeValue xsi:type="xs:string">jane@example.com</saml:AttributeValue>
    </saml:Attribute>
    <saml:Attribute Name="location" NameFormat="urn:oasis:names:tc:SAML:2.0:attrname-format:basic">
      <saml:AttributeValue xsi:type="xs:string">Tampa, FL</saml:AttributeValue>
    </saml:Attribute>

In this response you can see that there is a list of attributes and in each attribute is a Name and NameFormat. These are two values that you can use to map it to an attribute in Stormpath. In the above example let’s use the following mapping:

When mapping these attributes, you can use either name or nameFormat, or both. Stormpath will match as much as you provide to get the attribute value and map it to the correct account field. To set up this mapping, run the following:

$provider = \Stormpath\Resource\SamlProvider::get($directory->provider->href);

$ruleBuilder = new \Stormpath\Saml\AttributeStatementMappingRuleBuilder();

// NOTE: the account-field names below are illustrative; substitute the
// Stormpath account attributes you actually want each SAML attribute
// mapped onto.
$rule = $ruleBuilder->setName('uid')
    ->setAccountAttributes(['username'])
    ->build();
$rule2 = $ruleBuilder->setName('mail')
    ->setAccountAttributes(['email'])
    ->build();
$rule3 = $ruleBuilder->setName('location')
    ->setAccountAttributes(['customData.location'])
    ->build();

$rulesBuilder = new \Stormpath\Saml\AttributeStatementMappingRulesBuilder();
$rulesBuilder->setAttributeStatementMappingRules([$rule, $rule2, $rule3]);
$rules = $rulesBuilder->build();



Once these are mapped, Stormpath does just-in-time provisioning of the account, so we are always synced with the IdP. Stormpath inspects the XML returned from the IdP, finds the attributes you chose to map, and maps them onto the account fields you defined.

How to Use Your SAML Provider for PHP Applications

Now that you have set up your SAML provider in your application, you can use it for logging in. For this example, we are going to make a request to ID Site. Build a login request and redirect the user to ID Site:

$application = \Stormpath\Resource\Application::get($applicationHref);

$redirectTo = $application->createIdSiteUrl(['callbackUri'=>'{CALLBACK_URI}']);

header('Location: ' . $redirectTo); //or any other redirect method you want to use.

Once there, you should see your SAML provider as an option on the login screen.

Further Reading

Stormpath is really excited to offer SAML as part of the PHP SDK. We have full documentation on our GitHub page, where you can dig further into the details. We will soon be offering SAML support inside our integrations as well. Let us know what you think about our SAML support.

Feel free to drop us a line over email anytime.

Like what you see? Subscribe to keep up with the latest releases.

Anil JohnBuilding a Bridge Across the Research Valley of Death [Technorati links]

February 01, 2016 01:30 AM

In the business of research and development, the valley of death is the place where potentially great science and technology goes to die on the way to the marketplace. I’ve been thinking a lot about how one goes about building a bridge across this valley. Here are some readings I found to be of value and some of my thoughts on this topic.

Building a Bridge Across the Research Valley of Death

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

January 30, 2016

Nat SakimuraVariations on a Theme of FamilyMart [Woodwind Quartet + Piano] [Technorati links]

January 30, 2016 10:24 PM

This is today's hit: a video by Ensemble Mega Ne, uploaded on 2016/01/25, that has been viewed nearly 150,000 times in five days[1].


Published 2016/01/25

1. Theme
2. Monophony
3. Counterpoint
4. Baroque-era sonata
5. Mozart
6. Beethoven
7. Haydn
8. Classical-period recitative
9. Weber
10. Schubert
11. Chopin
12. Dvořák
13. Brahms
14. Mussorgsky
15. Debussy
16. Ravel
17. Satie
18. Rachmaninoff
19. Stravinsky
20. TV-drama ending theme
21. Movie trailer



Copyright © 2016 @_Nat Zone All Rights Reserved.

January 29, 2016

KatasoftParse is Shutting Down. What's Next for Your Backend? [Technorati links]

January 29, 2016 06:00 PM

Like many other mobile app developers, I was really sad to hear that Parse is shutting down. I use Parse quite heavily for some of my personal projects, and I appreciate how Parse enables mobile-first teams to build apps at lightning speed. They also did a phenomenal job at sunsetting their product.

As developers look to migrate off Parse to their own infrastructure, we’ve started to field some questions about how Stormpath is different from Parse, and how developers can leverage Stormpath in their apps.

Many developers see Parse as a useful way to prototype and ship an app quickly, but plan to migrate to their own backend as they gain traction. Parse starts to work against you as you scale. Because of the Parse shutdown, it’s time to evaluate if you should deploy a copy of Parse Server, evaluate another Backend as a Service (BaaS) provider, or migrate your backend off Parse.

Deploy Parse Server

Deploying Parse Server might seem like the easiest option for now. Even though it’s a nontrivial amount of work, you can keep your backend up and running without trouble. However, this can cause problems if you’re not careful.

What many people don’t realize is that Parse Server is not Parse’s infrastructure open-sourced. Instead, it’s a “Parse Compatible” server to keep your Parse apps running. In the migration guide, Parse says “there may be subtle differences in how your Cloud Code runs in Parse Server”. As a developer, I’m not excited to find and fix these “subtle differences”.

Also, Parse Push isn’t one of the open-sourced components, so you’ll still have to look into deploying your own push notification server, and rewrite your logic around that.

Deploying Parse Server, while a great stopgap mechanism, doesn’t look like a winning solution in the long term, and really only looks useful for small personal projects or apps in maintenance mode.

Migrate to Another BaaS Provider

While a clear equivalent hasn’t emerged just yet, many BaaS providers are already rushing to build migration tools that’ll help you move your application to their service. Even without Parse, other BaaS providers offer a similar value proposition — you’ll be able to build your apps quickly without worrying about the backend.

However, I can’t imagine the transition to be very easy. Just like the edge cases I’m worried about with Parse’s open source server, there will be even more with migrating to another BaaS. Features may not be exactly the same, with some missing and some working in different fashions. As Parse was super feature-rich, I would expect many headaches trying to learn and rewrite your backend for another BaaS provider.

Several entrepreneurs have taken this as an opportunity to build their own Parse-compatible backends, or deploy a hosted version of Parse Server. I’m optimistic about this; I love Parse and would probably use one of these services for my side projects. However, would you want to use this for your business? It’d be difficult to evaluate the trustworthiness of these startups; some of them might shut down less gracefully than Parse did, and where would you be then?

Build Your Own Backend

For any mobile app with a sizable amount of traction, I think this makes the most sense.

Many developers have already planned to move off Parse as they grow. Now is the right time to evaluate if you should build your own backend. Utilizing a BaaS provider in the long run limits your ability to build an application tailored to your needs, as you start pushing up against the limits of the built-in data models and design patterns, as well as performance issues related to their architecture.

Having made the decision to build your own backend, you’ll now need to figure out your authentication and authorization schemes. Perhaps you’ve decided on a service-oriented architecture, and you’ll need multiple backend services (possibly written in different languages) to access the same user data. Or, you need to move beyond sessions to using tokens so you can have a scalable backend that works across multiple services & platforms.

Writing that boilerplate for your authentication mechanisms takes valuable time which could be used for building actual features in your app. Do you want to spend the time building, deploying, and maintaining yet another service?

Why Stormpath?

Stormpath can help your team write a secure, scalable application without worrying about the nitty gritty details of authentication, authorization, and user security. We have easy SDKs for popular languages and frameworks, including Express.js, so that adding authentication to your application can be done in as little as 15 minutes and shared across multiple teams, web applications, and APIs. And if the need arises, your data is still yours and you’ll always be able to migrate off Stormpath.

In summary, Stormpath helps backend developers build better applications faster by dealing with the hard issues of building a rock-solid authentication system. If you’re considering building your own backend now that Parse is closing, I hope you consider Stormpath for your needs.

Try it now! Sign up for an account and try one of our quickstarts to see how easy it is to begin using Stormpath, or contact us to learn about how we can help migrate your user data to Stormpath.

Like what you see? Subscribe to keep up with the latest releases.

Ludovic Poitou - ForgeRockNew version of ForgeRock Identity Platform™ [Technorati links]

January 29, 2016 01:35 PM

This week, we announced the release of the new version of the ForgeRock Identity Platform, which brings new services in the following areas:


This is also the first identity management solution to fully implement the User-Managed Access (UMA) standard, making it possible for organizations to address expanding privacy regulations and establish trusted digital relationships. See the article that Eve Maler, VP of Innovation at ForgeRock and Chief UMAnitarian posted to explain UMA and what it can do for you.

A more in depth description of the new features of the ForgeRock Identity Platform has also been posted.

The ForgeRock Identity Platform is available for download now at https://www.forgerock.com/downloads/

In future posts, I will detail what is new in the Directory Services part, built on the OpenDJ project.

Filed under: Identity Tagged: access-management, Directory Services, ForgeRock, identity, Identity Relationship Management, opendj, platform, release, security, uma

Ludovic Poitou - ForgeRockNew version of the ForgeRock Identity Platform [Technorati links]

January 29, 2016 09:46 AM

This week we announced the new version of the ForgeRock Identity Platform™.


The ForgeRock Identity Platform is now able to continuously evaluate, in context, the authenticity of users, devices, and things.

This new version is also the first solution to support the User-Managed Access (UMA) standard, which lets individuals selectively share, control, authorize, and revoke access to their data, and thus gives businesses an open, standards-based way to protect and control the privacy of their customers' and employees' data. These privacy and consent-management needs are becoming important in healthcare, connected devices, and even the financial services sector.

To better understand UMA and the services offered by the ForgeRock Identity Platform, I suggest watching this short video (in English).

The ForgeRock Identity Platform is available for download right now at: https://www.forgerock.com/downloads/

Details of what's new in this version are on the ForgeRock website.

Filed under: InFrench Tagged: ForgeRock, identité, identity, opensource, plateforme, platform, release, uma

Matthew Gertner - AllPeersThe Business of Avoiding Identity Theft [Technorati links]

January 29, 2016 01:19 AM

Business owners have enough to fret about on a daily basis, so needing to deal with hackers is something they would rather avoid.

With that being said, you are seeing more and more in the news about identity thieves, aka cybercriminals, getting away with breaking into business websites. When such activities occur, business owners can be left with financial issues and, of course, public relations nightmares.


While there are different tricks of the trade to lower the odds your business site will fall victim to identity thieves, some of these tricks ultimately boil down to plain old common sense.

Be Pro-Active in Thwarting Cyber-criminals

If you are a business owner who is concerned about identity thieves making a play for your website, keep these tidbits in mind:

Running a business in 2016 and beyond certainly does come with its share of risks. In order to improve the chances that your brand will steer clear of hackers and ultimately identity theft, make sure you are a pro-active business owner and not a reactive one.


About the Author: Dave Thomas covers business security issues on the web and how to protect your brand against identity theft.

The post The Business of Avoiding Identity Theft appeared first on All Peers.

January 28, 2016

Mike Jones - MicrosoftOAuth Discovery metadata values added for revocation, introspection, and PKCE [Technorati links]

January 28, 2016 07:23 PM

The OAuth Discovery specification has been updated to add metadata values for revocation, introspection, and PKCE. Changes were:

The specification is available at:

An HTML-formatted version is also available at:

Mike Jones - MicrosoftIdentity Convergence and Microsoft’s Ongoing Commitment to Interoperability [Technorati links]

January 28, 2016 12:44 AM

Please check out this important post today on the Active Directory Team Blog: “For Developers: Important upcoming changes to the v2.0 Auth Protocol”. While the title may not be catchy, its content is compelling – particularly for developers.

The post describes the converged identity service being developed by Microsoft that will enable people to log in either with an individual account (Microsoft Account) or an organizational account (Azure Active Directory). This is a big deal, because developers will soon have a single identity service that their applications can use for both kinds of accounts.

The other big deal is that the changes announced are a concrete demonstration of Microsoft’s ongoing commitment to interoperability and support for open identity standards – in this case, OpenID Connect. As the post says:

The primary motivation for introducing these changes is to be compliant with the OpenID Connect standard specification. By being OpenID Connect compliant, we hope to minimize differences between integrating with Microsoft identity services and with other identity services in the industry. We want to make it easy for developers to use their favorite open source authentication libraries without having to alter the libraries to accommodate Microsoft differences.

If you’re a developer, please do heed the request in the post to give the service a try now as it approaches General Availability (GA). Enjoy!

January 27, 2016

ForgeRockThe ForgeRock Identity Platform – What’s New? [Technorati links]

January 27, 2016 12:00 PM

The ForgeRock Identity Platform – What’s New?

Today we are introducing our latest version of the ForgeRock Identity Platform, designed to be a unified solution that supports and enhances any of your digital services and digital transformation initiatives. It offers end-to-end identity and access management capabilities, which scale into the billions of identities, and support you not just now, but for years into the future.

In particular, the new release includes features that help address the complex challenges of data privacy and protection, as well as the emerging security challenges of the IoT. As more digital services and Internet of Things (IoT) devices come online, there is a greater need for dynamic security that is responsive to changing circumstances and proactively protects users, devices, and sensitive data.

“ForgeRock’s focus on highly scalable, business-to-customer identity management technology differentiates it in a sector still concerned primarily with traditional business-to-business offerings. Building identity management platforms around the customer rather than the business delivers significant benefits in terms of user experience, security and privacy online,” said Rik Turner, senior analyst on Ovum’s Infrastructure Solutions Team. “The implementation of the UMA standard in ForgeRock’s new platform will put unprecedented levels of data control into the hands of the consumer. This approach will help build greater trust online by ensuring that consumer privacy is front and center of the online experience.”

The platform is built from open source projects, namely from OpenIDM, OpenAM, OpenIG, OpenDJ, and OpenUMA.

There are numerous improvements and new capabilities in the ForgeRock Identity Platform; we’d like to highlight some of them here.

What’s New in Common Services

In this release of the ForgeRock Identity Platform, we are introducing some new Common Services and enhancing some existing ones. Having shared, or common, services across all components of a platform benefits not only how we develop the product, but especially how you extend functionality and deploy the platform. These include:

What’s New in Identity Management?

ForgeRock’s Identity Management solution, built from the OpenIDM project, allows you to manage the complete identity lifecycle of users, devices, and things. From registration through provisioning, synchronization, reconciliation, and more, it lets your users and customers move freely between devices. New capabilities include:

What’s New in Access Management?

ForgeRock Access Management, built from the OpenAM project, lets you use one solution to cover all your flexible and secure access needs: for users, devices, things, applications, and services. New capabilities include:


What’s New in Identity Gateway?

ForgeRock Identity Gateway, built from the OpenIG project, provides a flexible policy enforcement point to support your current environment while migrating towards a modern, standards-based platform, letting you connect digital assets across your ecosystem, with minimal-to-no changes.


What’s New in Directory Services?

ForgeRock Directory Services, built from the OpenDJ project, rethinks how identity data is stored, delivering massive data scale and high availability while providing developers with ultra-lightweight ways to access customer identity data, in order to personalize services and transform how customers engage with the world. It was designed to provide and manage digital identities across platforms. New capabilities include:


Where to find more information?

ForgeRock Identity Platform:

Common Services: https://www.forgerock.com/platform/common-services
Access Management: https://www.forgerock.com/platform/access-management
Identity Management: https://www.forgerock.com/platform/identity-management
User-Managed Access: https://www.forgerock.com/platform/user-managed-access
Identity Gateway: https://www.forgerock.com/platform/identity-gateway
Directory Services: https://www.forgerock.com/platform/directory-services
How To Buy: https://www.forgerock.com/platform/how-buy/


Open Source Projects:

OpenAM: https://forgerock.org/openam/
OpenIDM: https://forgerock.org/openidm/
OpenUMA: https://forgerock.org/openuma/
OpenIG: https://forgerock.org/openig/
OpenDJ: https://forgerock.org/opendj/


Other Resources:

Resources: https://www.forgerock.com/resources/


These are just some of the highlights.

In summary, the latest release of the ForgeRock Identity Platform includes new features that aid organizations in addressing the growing demand for control over data. The platform also provides improved security measures that give organizations a more convenient and reliable way to address the security challenges of our connected world.


The post The ForgeRock Identity Platform – What’s New? appeared first on ForgeRock.com.

ForgeRockUser-Managed Access: A new tool for privacy, consent, and building trusted digital relationships [Technorati links]

January 27, 2016 12:00 PM

The news: ForgeRock’s new Identity Platform is out, and I couldn’t be more thrilled that our conforming implementation of the User-Managed Access (UMA) standard is a part of this release. The ForgeRock Identity Platform is the first identity management platform to support UMA for consumer consent and data sharing — scenarios that are near and dear to my heart.

What is UMA about? Before talking about UMA, let me muse on the complex topic of privacy for a moment. The following Venn diagram represents key elements that contribute to privacy.

There is aspirational language in the Fair Information Practice Principles and Privacy by Design about “transparency” and “control” and such; this is what the autonomy aspects of privacy are about. Individuals have the right to be left alone, to make decisions unmolested by pressure, to make health decisions with a full set of data at their command, and so on.

However, most businesses and most regulations have — understandably — focused on data protection (what privacy is generally called in Europe!), and on risk mitigation. In other words, privacy compliance has largely been about “ensuring that data doesn’t get out.” As a result, tools for consent have been limited.

But new forces are coming into play. Consumers have become more skeptical and cynical; the regulatory landscape is now shifting, as we have seen with Safe Harbor and the emerging EU General Data Protection Regulation regime; the Internet of Things is pushing data volumes and sources to new heights; and in the face of all this, businesses are keen to build trusted digital relationships with their customers.

UMA’s design center can be expressed simply: It’s an OAuth-based protocol designed to give an individual a unified control point for authorizing who and what can get access to their digital data, content, and services, no matter where all those things live. The resulting architecture makes UMA a strong basis for tools and solutions for building trusted digital relationships, both from the perspective of addressing externally imposed consent requirements and from the perspective of demonstrating trustworthiness to customers.

What ForgeRock has been up to: Our amazing product teams have built impressive support for the hardest UMA scenarios: those serving consumers and citizens. Our out-of-the-box user interface for the central sharing management console (the “UMA authorization server”) offers value-add features such as: 1) Share buttons, so that end-users can choose to delegate selective access to IoT devices, online identity data, and other digital resources, and 2) pending-request pages, so that end-users can selectively approve access requests made to those resources. Additionally, API endpoints let developers customize interfaces, flows, and policy conditions as required for business needs. You can also UMA-protect your services with our gateway component. Further, some of our partners have been gaining experience building POCs for a variety of use cases.

Memory lane: My work on the protocol goes back to early 2008, before there was an UMA, so I have a lot of history with this work. I worked with colleagues at Sun Microsystems on a predecessor called “ProtectServe,” and then gathered with like-minded people at the Internet Identity Workshop series of events (whose 22nd meeting ForgeRock will be sponsoring again in April!) to begin planning a proper standardization effort. Ultimately we took the project to the Kantara Initiative; the Version 1.0 specifications were approved in March 2015 and Version 1.0.1 just last month.

Final thoughts and thanks: As founder and chair of the UMA Work Group, as well as in my role driving privacy and consent innovation for ForgeRock, I have been and continue to be blessed to work with an amazing bunch of UMAnitarians inside and outside the company. Up next: Deployments in various verticals.

Here’s to more “context, control, choice, and respect” for users’ data in 2016!

Eve can be found on Twitter at @xmlgrrl

The post User-Managed Access: A new tool for privacy, consent, and building trusted digital relationships appeared first on ForgeRock.com.

Julian BondThe only possible response is "Oh dear". And that's what they want you to say. [Technorati links]

January 27, 2016 08:49 AM
The only possible response is "Oh dear". And that's what they want you to say.


[from: Google+ Posts]
January 26, 2016

Matthew Gertner - AllPeersFeeling Blah? Ways to Cultivate A Positive Outlook on Life [Technorati links]

January 26, 2016 06:10 PM

Working on creating a positive outlook on life can combat depression ... photo by CC user Andrew Mason on wikimedia commons

Whether it’s the aftermath of the holidays, a cold winter, or just feeling blah, climbing out of a rut can be difficult to do. Fortunately, the following suggestions can help you cultivate a positive outlook on life, while making you feel better about yourself.

Do Something for Others

Everyone suffers a case of the Mondays on occasion. However, if you’re having difficulties getting out of a funk, you may need to get proactive. Even if it’s partly for selfish purposes, doing something good for someone else can boost your outlook on life just as much as it will benefit the person in need. If you’re feeling low, push yourself to help others. If you’re an animal lover, volunteer to walk dogs or play with the cats from a shelter. Are you handy? You could sign up to build houses for Veterans who were left handicapped or homeless after serving their country. If you lack friends or are feeling lonely, try volunteering at a senior center. Here you can spend time playing games or listening to those who have a lot of wisdom to impart to the younger generation.

Change Your Appearance

In addition to being happy with who you are as a person, feeling good can also come with an improved appearance. If you’ve kept the same hairstyle for years, it’s time for a new color or cut. For those who have always worn eye glasses, a simple change to contact lenses can totally change your look. Even better, quality contact lenses may not be as expensive as you would think, especially when you buy contacts online.

Be a Good Citizen

Whether you want to help your local community or improve the environment, simple gestures of good citizenship can improve your spirits. Recycling old magazines or repurposing the furniture and goods in your home can be excellent ways to help save the planet. If you’re looking to help others, you could also sign up with your state driver’s license facility to be put on the organ donor list.

Keep Your Resolutions Small

If you’ve made New Year’s resolutions to become a millionaire or to quit consuming sugar in the coming year, you may be setting yourself up to fail before you’ve begun. Setting smaller goals will make you happier and allow you to feel like you’re accomplishing something. Small tasks can include items such as cleaning the garage, going through your old clothing, working out for 10 minutes each day or limiting yourself to ice cream once each week instead of daily.

Find Something You Enjoy

Everyone has tasks to do that they are less pleased to take part in. However, if you enjoy reading, cooking, spending time with your family or pet, make it a point to do it every day. By spending time doing the things that you love most, you’ll become a happier and healthier you.

Stay Active

Losing your zest for life can make you lethargic, dull and instill feelings of unworthiness. If you’re looking to kick start your outlook on life, find things that can get you off the couch and moving. For an energetic boost, try taking a brisk walk outdoors. Even on a cold winter’s day, the scenery, sunlight and fresh air can be exhilarating. Encourage a friend or family member to join you on your trek.

Challenge Yourself

One of the best ways to get out of a rut is to challenge yourself. If you favor amusement parks, take a ride on the tallest roller coaster. You can also take up a sport such as surfing or white-water rafting. Whatever gets you out of your comfort zone may help lift your mood. You can also challenge yourself by facing a fear. If you’ve always been leery of speaking in public, push yourself to do less intimidating feats such as joining a club where you can make new acquaintances or asking someone on a date that you’ve always been fond of.

The post Feeling Blah? Ways to Cultivate A Positive Outlook on Life appeared first on All Peers.

KatasoftThe Fundamentals of REST API Design [Technorati links]

January 26, 2016 06:00 PM

Here at Stormpath, we give developers easy access to user management features – including authentication, authorization, and password reset – via our REST API and language-specific SDKs. Our team of experts began with already-significant knowledge about REST+JSON API design. In the process of building the REST API, they learned even more. Our CTO, Les Hazelwood, gave a well-received presentation to a group of Java developers on REST+JSON API design best practices, which you can watch here:

We’ve also written posts on how best to secure your REST API, as well as linking and resource expansion in REST APIs. This post will give a high level summary of the key points that Les touches on in his talk – specifically the fundamentals of good REST+JSON API design.

So Why REST?

Keeping the goal of rapid adoption of an API in mind, what makes RESTful APIs so appealing? Per Dr. Roy Fielding’s thesis on the REST paradigm, there are 6 distinct advantages of REST:

  1. Scalability – not necessarily raw performance, but rather how easily RESTful APIs can adapt, grow, and be plugged into other systems.

  2. Use of HTTP – being able to use HTTP methods to manage resources makes RESTful APIs easy to plug into other applications.

  3. Independency – with a REST API you can deploy or scale specific parts of the application without having to shut down the entire application or an entire web server farm.

  4. Reduced latency due to caching – REST APIs prioritize caching, which helps to improve latency. So always keep caching top of mind when you’re developing your REST API.

  5. Security – the HTTP specification lets you support security via certain HTTP headers, so you can leverage this to make your API secure.

  6. Encapsulation – there are parts of the application that don’t need to be exposed to a REST caller, and REST as an architectural style allows you to encapsulate those gritty details and only show things that you need to show.

And Why JSON?

  1. Ubiquity – over 57 percent of all web-based applications have JSON, are built on JavaScript, or have JavaScript components.

  2. Human readable – it uses very simple grammar and language, so a human can easily read it, including folks just starting to get into software development.

  3. It’s easy to change or add new fields.

What makes REST design difficult?

RESTful APIs are difficult to design because REST is an architectural style, and not a specification. It has no standard governing body and therefore has no hard and fast design rules. What REST does have is an interpretation of how HTTP protocol works, which allows for lots of different approaches for designing a REST API. While use of HTTP methods is a core advantage of the REST approach, it also means that there are lots of different RESTful API designs. We’re going to focus on some of the best guidelines that we’ve come up with in designing our REST API.

REST API Design Guidelines

1. Keep your REST API resources coarse grained, not fine grained

Basically, you don’t know how your user is going to interact with your resources. In the case of Stormpath, resources would be accounts, groups, or directories. There are lots of different actions they might run on those resources and if they are adding in lots of arguments to methods they’ve written for a particular resource, it can be difficult to manage. So we recommend that, for a given resource in your REST API, you write a method that takes the resource itself as an argument, and the method contains all the functionality needed for said resource.

How else do you keep things coarse grained? You work with collection and instance resources. A collection resource is what it sounds like – basically a folder containing similar resources. An instance resource is a singular instance of its parent resource. This lets HTTP behavior target an endpoint definition (an instance resource) without creating another URL for each instance-behavior combination. So don’t add behavior to the actual endpoint definition.
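As a sketch, the collection/instance split might look like this (resource names and hrefs invented for illustration):

```http
GET  /applications              → collection resource (all applications)
GET  /applications/a1b2c3       → instance resource (one application)
POST /applications              → create a new instance in the collection
POST /applications/a1b2c3      → modify the existing instance
```

The behaviors live entirely in the HTTP methods; the URLs name resources only.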

2. Use POST to take advantage of partial updates in your REST API

Since REST APIs run on standard HTTP methods, you can use PUT or POST to either create or update resources. You might think of using PUT to create a resource and POST to update a resource, but you can actually use POST for both, and that’s recommended.

Why would you want to use POST to both create and update a resource?

Because with POST you don’t need to send over all fields for that data resource on every call you make, whereas with PUT, you do. This matters because if every update also sends over fields that are not changing, you consume more bandwidth than you need to. Using POST instead of PUT can be beneficial if your REST API is metered, as it limits the quantity of traffic. Furthermore, when you get into millions or hundreds of millions of requests per month, that extra traffic impacts your REST API’s performance – so use POST to limit it.

And why can’t you do the same with PUT?

Because per the HTTP specification, PUT is idempotent: sending the identical request any number of times must leave the resource in the same state, which means a PUT has to include the resource’s full set of properties. For example, if you first create an application without specifying a description, and then in a later call send only the description, each request describes a different complete state – breaking the idempotency mandate.
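To make the contrast concrete, here is a hypothetical pair of requests (resource path and field names invented for illustration):

```http
# Partial update: POST sends only the field that changed
POST /applications/a1b2c3
Content-Type: application/json

{ "description": "My new description" }

# Full replacement: PUT must carry the complete representation,
# so that replaying the identical request always produces the same state
PUT /applications/a1b2c3
Content-Type: application/json

{ "name": "My App", "description": "My new description", "status": "ENABLED" }
```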

3. Have REST API documents link to other documents based on the notion of a media type

A media type is a specification of a data format and a set of parsing rules associated with that specification. With your REST API, if you’re writing a client, be sure to include the data format you would like returned in the Accept header. Likewise, as the server, return a Content-Type header that notes how the data is actually being returned. You can also add additional parsing rules to whatever data type you’re using. For example, you might have the media type application/foo+json, which tells the client not only that this is JSON-formatted data, but also that it’s foo, which tells the client how to parse that data.

So make sure to send Accept headers specifying what media type you want (if on the client side), send Content-Type headers telling the client what data format you are returning (if on the server side), and take advantage of custom media types to make your REST API more flexible for your clients.
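A minimal request/response exchange might look like this (the foo media type is hypothetical):

```http
# Client: "I would like foo-flavored JSON back, please"
GET /applications/a1b2c3
Accept: application/foo+json

# Server: "here is the format I am actually returning"
HTTP/1.1 200 OK
Content-Type: application/foo+json
```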


In this post we’ve covered the advantages of the RESTful API design approach, as well as the fundamentals for creating a developer-friendly REST API. This is merely a summary of Les’ points, and only the first 30 minutes at that, so be sure to check out the rest of the presentation.

Like what you see? to keep up with the latest releases.

Julian BondI'd like to propose an extension to Betteridge's Law. "Any headline that ends in a question mark can... [Technorati links]

January 26, 2016 03:19 PM
I'd like to propose an extension to Betteridge's Law. "Any headline that ends in a question mark can be answered by the word no." https://en.wikipedia.org/wiki/Betteridge%27s_law_of_headlines 

"Any headline that contains 'could' can be answered by the words, Probably not."

eg "This $500 bamboo bicycle could be a key to reliable, affordable, and sustainable transportation". Hmmm!?! Probably not.
[from: Google+ Posts]

WAYF NewsCph Uni to change eduPersonPrincipalNames on March 1, 2016 [Technorati links]

January 26, 2016 01:48 PM

On March 1, 2016 at 10 o'clock, all users at the University of Copenhagen ('KU') will receive new values for their eduPersonPrincipalName ('ePPN') and eduPersonTargetedID ('ePTID') attributes.

Web services identifying their users by ePPN or by ePTID from WAYF will thus be unable to recognise KU users following that time.

If being able to recognise existing KU users by ePPN is business critical to a service provider, WAYF, cooperating with KU, offers to exchange all KU ePPN values currently known to the service into the corresponding new ones.  Service providers interested in this must contact Mikkel Hald of the WAYF Secretariat to make an arrangement.

Regrettably, it is not possible to have current ePTID values originating from KU changed into corresponding new ones.

Service providers affected by KU's change of ePPN, ePTID values can contact Mikkel Hald of the WAYF Secretariat if they have any questions on the matter.

January 25, 2016

Julian BondWhy is David Cameron's "Greenest Government Ever" turning away from investment in renewable infrastructure... [Technorati links]

January 25, 2016 07:40 AM
Why is David Cameron's "Greenest Government Ever" turning away from investment in renewable infrastructure? Especially after Paris last year. We were doing quite well there for a while.

Did they make too many promises to their fossil fuel friends in the fracking industry? 


There's something strange going on here.

There's a broad mix of opposition politicians, energy companies and green groups asking the same questions.

Last July, the UK government started to roll back support for renewable energy, citing forecasts of cost overruns and the need to keep down household bills.

New projections showed the Levy Control Framework (LCF) cap for low-carbon support would be breached, the government said. Yet it has never published the details of its updated calculations, despite the multi-billion pound implications for the direction of UK energy investment.

 A pioneer in renewable energy is inexplicably abandoning the cause »
Leader no more.

[from: Google+ Posts]
January 23, 2016

Mark Dixon - OracleHave Fun Today – Throw a Frisbee! [Technorati links]

January 23, 2016 02:56 PM

On this day, way back in 1957, the first batch of Frisbees was produced by the Wham-O toy company! Who can imagine life without Frisbees? Have fun!



Julian BondNeo-Optimist Statistics [Technorati links]

January 23, 2016 10:54 AM
Beware neo-optimists (and pessimists) using "TED-Stats" to sell you a story. It starts with complicated but pretty graphics that combine 4 different things via multiple confusing axis scales mixed in with colour and different sized spots. Then notice that the graphics are only available as part of a youtube video of a powerpoint presentation with no citations or links to the underlying data. Dig deeper and you'll find the following common statistical strategies.   

1) Excessive accuracy. For instance total global population to an accuracy of 1000 in 7,000,000,000. Or total CO2 production against global GDP when this is hugely dependent on figures from China and India. One example is the de-coupling carbonisation story that requires a pause in CO2 production during a continued rise in GDP. Except that both figures just got restated for the last 10 years by 10 times the small change you're looking for. And with China the figures are coming out of a hierarchy with a huge incentive to lie about them. And as Eris knows, truth in communication is very hard in strongly hierarchical systems where self-serving lies are the norm.

2) Mistaking percentages for absolute figures. Total global population has been following linear growth for 4 decades with each successive billion taking about 12-13 years to add. If anything the time to add a billion is shortening from 15 to 12 years. But if you measure the growth as a percentage of the total, of course the percentage growth is dropping. You can sell this story as "Population growth has been dropping for 4 decades". Then extrapolate this drop in percentage out as a straight line and you can argue that it will drop to zero and so population will peak. Except that the linear growth is still happening and we're still adding a billion every 12 years. Even the most reliable source of figures, the UN, can simultaneously predict a drop in population growth due to fertility rates and also adjust the date we get to 9b closer by 6 years. So which is it; is growth slowing down or is it speeding up?

There's a similar kind of statistical lying in the story about CO2 production flat-lining. So the total yearly emissions in GtC/y didn't increase so the year on year rate changed by 0% or went down by 0.1%. Great. Except that yearly emissions were already at a record high and will be pretty much the same next year.

3) Focussing on detail and sub-sections instead of the holistic big picture. The classic example here is to argue that because the USA and developed world reduced their onshore fossil fuel use or pollution or CO2 output then things are getting better. If everybody could do this, then we’d solve the climate change problem. But this ignores the global picture that the developed world reduced its emissions by outsourcing all its manufacturing to the developing world. And we outsourced all our pollution as well. Not only that but the shipping involved is a major polluter as well.

Now add all this together and attempt to argue that things are getting better because the numbers of people in extreme poverty are reducing rapidly due to capitalist globalisation. See, the percentage of people in extreme poverty is coming down year on year! If we keep this up we can wipe out extreme poverty by 2030. Add in a lot of hand-waving arguments about average incomes, education, fertility rates, urbanisation to justify that. But, wait. At the same time as the percentage was dropping we were adding 1b every 12 years. How come there were 1000m in extreme poverty in 1820 and still 1000m in 2010? And it's essentially the same people in subsistence farming in the poorest countries that are classified as in extreme poverty. Didn't we achieve anything in the last 200 years of progress? Clearly mankind has achieved an astonishing amount in the last 200 years and if nothing else we've added 5.5b people who are NOT IN EXTREME POVERTY. That's a cause for optimism as long as you don't think too hard about the implications of continuing to add more non-poor people to the mix indefinitely. But what we haven't done is to reduce the absolute number who are in poverty. Except maybe we have. Look closely and it's possible that the absolute numbers have been dropping a little this century. But be extremely wary of the accuracy of that reporting.

It makes you wonder. Does anyone know WTF is going on and can describe it accurately without bias and an agenda?

Bonus links
 World Poverty — Our World in Data »

[from: Google+ Posts]
January 22, 2016

Julian BondIn preparation for The Crisis, read this [Technorati links]

January 22, 2016 11:02 AM
In preparation for The Crisis, read this

while listening to this

And remember Hawkwind's immortal words,
"Think only of yourself".

The key is to freak out early and freak out often (FEFO) in an agile way, and work towards a lifestyle that (ideally) feels like one continuously integrated and deployed mid-life crisis. There is actually good intellectual justification for approaching life this way. It’s called the Lindy effect, which says you’ll live as long again as you already have, until you don’t.

Which means you’re always at mid-life. Until you’re not.
[from: Google+ Posts]

KatasoftHow to: Secure Connected Microservices in Spring Boot with OAuth and JWTs [Technorati links]

January 22, 2016 09:09 AM

If you’re a developer who, like myself, loves Microservices for their flexibility and scalability then you’ve probably run into this challenge:

How can you easily scale your application while maintaining the security and efficiency of service-to-service communications?

Microservices consist of many independent processes communicating with each other over an API. The keyword there is many. All of these processes need to exchange information to perform complex tasks, and each communication exposes your application to vulnerabilities and latency.

In this post, I’m going to show you how to secure service-to-service communications using OAuth and JWTs. We’ll use a Spring Boot app consisting of two Stormpath-backed services. In this example, you authenticate to the first service, which calls the second service to get a response.

Then, we’ll speed things up by reducing the number of calls over the wire with a distributed cache. Caching FTW!

Jot This Down: Use JWTs for Property-Packed Tokens

At the heart of the service-to-service communication are JWTs (pronounced “jots”). JWT stands for JSON Web Token and is an open source standard. The Stormpath Java SDK makes use of the open source JJWT library for anything that requires a token. This is everything from our outward facing OAuth2 integration, to our own SSO service called ID Site.

JWTs can function purely as tokens. That is, a string of letters and numbers with no inherent meaning that services make use of. For instance, in an OAuth authorization workflow, the server will create an access token and store it on the Account for future lookup. End users will present the token to identify themselves, and the server will do a lookup to confirm that it matches what it has on file.

However, JWTs are so much more. This is because there can be meaningful information encoded into a JWT. This information is called “claims”. There are a number of default claims, including issuer, subject and others. The standard supports custom claims – meaning, you can encode any type of text based key/value pair you want into a JWT. This enables us to work with tokens in a very powerful way. For instance, a JWT may carry information on what level of access a user has to your application.

Most importantly, JWTs can be digitally signed. This makes it possible to guard against man-in-the-middle attacks, as you can verify a JWT in code using standard signing and verification algorithms.

It’s also worth noting that JWTs can be encrypted. This is a step beyond signing. Today, most JWTs are signed but not encrypted. This kind of JWT can be easily decoded and, as such, you would never put sensitive information in a JWT of this type. With encryption, new possibilities arise for passing sensitive data within a JWT. Upcoming releases of the JJWT library will include support for encryption.

An exhaustive look at JWTs is beyond the scope of this post. However, this blog post goes into a lot of detail.
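To make the token mechanics concrete, here is a minimal sketch of signing and verifying a JWT by hand using only the JDK. The header, claims, and secret below are invented for illustration; in a real application a library such as JJWT handles all of this for you.

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class JwtSignatureSketch {

    static String b64url(byte[] bytes) {
        // JWTs use unpadded, URL-safe Base64
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    static String hs512(String signingInput, String secret) throws Exception {
        // HMAC-SHA512 is the "HS512" of the JWT spec
        Mac mac = Mac.getInstance("HmacSHA512");
        mac.init(new SecretKeySpec(secret.getBytes(StandardCharsets.UTF_8), "HmacSHA512"));
        return b64url(mac.doFinal(signingInput.getBytes(StandardCharsets.UTF_8)));
    }

    public static void main(String[] args) throws Exception {
        String header  = b64url("{\"alg\":\"HS512\",\"typ\":\"JWT\"}".getBytes(StandardCharsets.UTF_8));
        String payload = b64url("{\"sub\":\"https://api.stormpath.com/v1/accounts/example\"}"
                .getBytes(StandardCharsets.UTF_8));
        String secret  = "shared-api-key-secret"; // hypothetical shared secret

        // token = base64url(header) . base64url(payload) . HMAC(header "." payload)
        String token = header + "." + payload + "." + hs512(header + "." + payload, secret);

        // Verification: recompute the signature with the shared secret and compare
        String[] parts = token.split("\\.");
        boolean valid = parts[2].equals(hs512(parts[0] + "." + parts[1], secret));
        System.out.println(valid); // prints "true"
    }
}
```

Anyone who tampers with the header or payload without knowing the shared secret cannot produce a matching signature, which is exactly the property the two services rely on below.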

Secure Microservice Calls Between Friends

We want to be absolutely sure that our two Microservices can trust each other. JWTs to the rescue!

By using a shared key to sign the JWT that we pass from one service to the other, we can be confident in trusting the JWT by verifying the signature. We can pack whatever information we want to communicate from one service to the other right into the JWT.

Building JWTs with the Java Fluent Interface

JwtBuilder jwtBuilder = Jwts.builder()
    .setSubject(account.getHref())
    .setIssuedAt(new Date(System.currentTimeMillis()))
    .setExpiration(new Date(System.currentTimeMillis() + (1000 * 60)))
    .setId(UUID.randomUUID().toString());

String secret = client.getApiKey().getSecret();
String token = jwtBuilder.signWith(
    SignatureAlgorithm.HS512, secret.getBytes("UTF-8")
).compact();

Let’s break this down, line by line.

The JJWT library provides a fluent interface for building JWTs.

Line 1 – and each successive method call – returns a JwtBuilder object.

Lines 2 – 5 put information into the JWT using the standard claims: subject, issued at, expiration and ID. These claims have short names that are specified in the standard: sub, iat, exp, and jti. The subject is the key bit of information we want to send to the second service. It identifies the Account of the authenticated user by href.

Line 7 is crucial to the way this all hangs together securely. We need a shared key so that both Microservices can verify the authenticity of the JWT that’s passed in. The Stormpath Client already has access to its API Key Secret. The combination of the API Key ID and the API Key Secret is what allows your application to securely communicate with Stormpath. We can reuse the API Key Secret as the shared secret between the services.

Lines 8 – 10 sign the JWT using the Stormpath Client’s API Key Secret. The call to compact returns the final encoded String representation of the JWT. This is what we will pass across the wire from one service to the other.

A Look at the Guts of a JSON Web Token

Here’s an example of what the above code produces (newlines added for clarity):


The middle string is the payload. Decoded, the payload looks like this:

{
  "sub": "https://api.stormpath.com/v1/accounts/3jizQjQQsBTsukENmcuWKj",
  "iat": 1452699151,
  "exp": 1452699211,
  "jti": "ece29f22-ddb7-490b-8461-a9661c1f5b03"
}

Microservices In Action

Let’s take a look at our two services working and then we’ll look at some more of the code.

Stormpath Admin Console Configuration

In order for the example code to work, you’ll need to configure your Stormpath Application and create some Accounts. First, create a Stormpath account here.

In your Stormpath Admin Console, do the following:

  1. Create an Application and make note of the href
  2. Create a Directory and map it to the application
  3. Create a Group in the Directory and make note of the href
  4. Create a number of Accounts and add at least one to the Group from the previous step

For this example, I’ve created these accounts:

  1. nick.fury@avengers.com
  2. iron.man@avengers.com
  3. captain.america@avengers.com
  4. hulk@avengers.com
  5. thor@avengers.com

And, I’ve added nick.fury@avengers.com to the admin group.

Here’s what it looks like in the admin console:



Admin Group:




Admin Group Accounts:


Microservices Launchpad – Right from Your Commandline

Now, let’s fire up our Microservices and see it in action! Note: There is one application codebase, but we are going to start two servers running on different ports. One of the servers will connect to the other as you will see below.

  1. Build the application:

    mvn clean package

  2. Start the first server (default port 8080):

     java -jar target/*.jar

  3. Start the second server (port 8081):

     java -Dserver.port=8081 -jar target/*.jar

We’re going to interact with the first server right from the command line using curl. We’ll authenticate using a standard OAuth2 workflow called grant_type=password. For more information on OAuth, check out this excellent blog post. We even have a podcast devoted to the topic!

First, we’ll get an access token.

# (token endpoint path assumed: Stormpath's default OAuth endpoint)
curl -X POST \
  -H "Origin: http://localhost:8080" \
  -H "Content-Type: application/x-www-form-urlencoded" \
  -d "grant_type=password&username=nick.fury@avengers.com&password=Passw0rd$" \
  http://localhost:8080/oauth/token
Notice that I am passing in the username and password. In a real production environment, you would want this to be an https connection. In fact, Stormpath insists on it!

Here’s what the response looks like:


Look familiar? Yup – the access_token is a JWT.

Next, as part of the OAuth flow, we will use the access token we got in the previous step as a Bearer token for our service.

# (service endpoint path assumed for illustration)
curl \
  -H "Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.eyJqdGkiOiJhODMxMTEyNC05MzAxLTQwODEtOTU1NC01OWVmM2M4NmRiNGYiLCJpYXQiOjE0NTI3MDc3MTMsInN1YiI6Imh0dHBzOi8vYXBpLnN0b3JtcGF0aC5jb20vdjEvYWNjb3VudHMvNjloRTNZZ3hobzhRUFpiNXNyWWI4cyIsImV4cCI6MTQ1Mjk2NjkxM30.tl-73bkrofE0QxjCoMVGQNZ1JFv9GyXQlfMgT2_4w8Y" \
  http://localhost:8080/accounts

And, here’s the response:

{
    "emails": [
        ...
    ]
}

Maybe not much to look at, but behind the scenes, we’ve achieved service-to-service communication! Huzzah!

Before we look deeper into the code, let’s look at what happens if we try the same thing with an account that’s not an admin. As a convenience, the code repo has a bash script that asks for your email and password and does the two curl commands for you.

email: hulk@avengers.com

{
    "message": "You must be an admin!"
}

The system works!

Code Walkthrough: Combining Spring Boot, OAuth & JWT

In the pom.xml file, there are three primary dependencies: The Stormpath Java SDK, the JJWT library and the Apache HttpClient library. We use the HttpClient library to connect from one service to the other.

Here’s the first service endpoint from MicroServiceController.java:

@RequestMapping("/accounts") // mapping path assumed for illustration
public @ResponseBody AccountsResponse accounts(HttpServletRequest req) throws Exception {

    if (!isAuthenticated(req)) {
        throw new UnauthorizedException("You must authenticate!");
    }

    Account account = AccountResolver.INSTANCE.getAccount(req);

    // create a new JWT with all this information
    JwtBuilder jwtBuilder = Jwts.builder()
        .setSubject(account.getHref())
        .setIssuedAt(new Date(System.currentTimeMillis()))
        .setExpiration(new Date(System.currentTimeMillis() + (1000 * 60)))
        .setId(UUID.randomUUID().toString());

    String secret = client.getApiKey().getSecret();
    String token = jwtBuilder.signWith(
        SignatureAlgorithm.HS512, secret.getBytes("UTF-8")
    ).compact();

    return communicationService.doRemoteRequest(remoteUri + REMOTE_SERVICE_ENDPOINT, token);
}

Lines 4 – 6 are making sure that the incoming request is from an authenticated user. The Bearer token in the request ensures this.

Lines 11 – 20 we saw earlier. That’s where we create our signed JWT.

Line 22 calls a Spring @Service that makes the request of our second server.

Here’s what that service looks like. The file is CommunicationService.java:

@Service
public class CommunicationService {
    public AccountsResponse doRemoteRequest(String url, String token) throws Exception {
        // make request of other micro-service
        GetMethod method = new GetMethod(url + "?token=" + token);
        HttpClient httpClient = new HttpClient();
        int returnCode = httpClient.executeMethod(method);

        BufferedReader br = new BufferedReader(
            new InputStreamReader(method.getResponseBodyAsStream())
        );
        StringBuffer buffer = new StringBuffer();
        String line;
        while ((line = br.readLine()) != null) {
            buffer.append(line);
        }

        ObjectMapper mapper = new ObjectMapper();
        return mapper.readValue(buffer.toString(), AccountsResponse.class);
    }
}

Lines 5 – 16 make use of the Apache HttpClient to make an HTTP request to our other server.

Lines 18 and 19 take advantage of the Jackson JSON serialization/deserialization features built into Spring. It’s taking the raw text String returned from our second server and deserializing that into the AccountsResponse object.

Here’s the second service endpoint from MicroServiceController.java:

@RequestMapping(REMOTE_SERVICE_ENDPOINT) // mapping assumed for illustration
public @ResponseBody AccountsResponse microservice(@RequestParam String token) throws Exception {

    if (!Strings.hasText(token)) {
        throw new UnauthorizedException("Missing or Empty token!");
    }

    // verify jwt
    String secret = client.getApiKey().getSecret();
    Jws<Claims> claims = Jwts.parser()
        .setSigningKey(secret.getBytes("UTF-8")).parseClaimsJws(token);

    String accountHref = claims.getBody().getSubject();

    Account account = client.getResource(accountHref, Account.class);

    return adminService.buildAccountsResponse(account);
}

Lines 9 – 11 verify the JWT using the API Key Secret from the Stormpath Client. Notice that we are again using the Stormpath Client API Key Secret. It’s our shared secret. If the signature on the JWT is not valid, an Exception is thrown when parseClaimsJws is called.

Line 17 calls a Spring @Service that builds our response.

Here’s AdminService.java:

@Service
public class AdminService {
    @Value("#{ @environment['stormpath.admin.group.href'] }")
    String adminGroupHref;

    @Autowired
    Application application;

    public boolean isAdmin(Account account) {
        return account.isMemberOfGroup(adminGroupHref);
    }

    public AccountsResponse buildAccountsResponse(Account account) {
        AccountsResponseBuilder accountsResponseBuilder = AccountsResponseBuilder.newInstance();
        if (isAdmin(account)) {
            List<String> emails = getAllEmails(application);
            accountsResponseBuilder.status(STATUS.SUCCESS).emails(emails); // builder calls assumed
        } else {
            accountsResponseBuilder.status(STATUS.ERROR).message("You must be an admin!");
        }
        return accountsResponseBuilder.build();
    }

    private List<String> getAllEmails(Application application) {
        List<String> emails = new ArrayList<String>();
        for (Account acc : application.getAccounts()) {
            emails.add(acc.getEmail());
        }
        return emails;
    }
}

Line 15 ensures that the Stormpath Account passed in is a member of the admin Group. If so, then we build a list of email addresses. If not, then we set the status to ERROR.

The important thing is that an AccountsResponse object is always returned. This makes for a consistent API.

Distributed Cache: Spring Boot Makes It Pluggable

Now that we have our services speaking to each other, let’s see what optimizations we can make.

The Stormpath Java SDK comes with a configurable cache out of the box. That cache is really only designed to be used within a single JVM.

The cache subsystem is completely pluggable and, in conjunction with the Spring Boot integration, it’s super easy.

Let’s look at the network interaction with no distributed cache.

I’ve enabled wire logging on the two application servers. What this means is that every interaction over the network will be shown in the log in detail.

Running through the same example as before, we can see that both the first and second application servers are interacting with Stormpath over the Internet.



Now, let’s enable a distributed cache. For this example, we’ll use Hazelcast. Note: Stormpath is completely cache agnostic. Hazelcast, Redis and others are easy to drop in. The only requirement for a Spring or Spring Boot application is that the cache system supports the CacheManager interface.

In the application.properties, we’ll change:

stormpath.hazelcast.enabled = false

to:

stormpath.hazelcast.enabled = true

To understand how this property is used, let’s take a look at the CacheConfig.java file:

@Configuration
@ConditionalOnProperty(name = {"stormpath.hazelcast.enabled"})
public class CacheConfig {

    @Bean
    public HazelcastInstance hazelcastInstance() {
        return Hazelcast.newHazelcastInstance();
    }

    @Bean
    public CacheManager cacheManager() {
        // The Stormpath SDK knows to use the Spring CacheManager automatically
        return new HazelcastCacheManager(hazelcastInstance());
    }
}

This takes advantage of some of the great Spring Boot magic, which is all about automatic configuration.

Line 1 identifies this class as a Configuration to Spring.

Line 2 is where the stormpath.hazelcast.enabled property comes into play. The Spring Boot @ConditionalOnProperty annotation ensures that this configuration will only be loaded if the stormpath.hazelcast.enabled property is set to true.

Let’s look at this in action. As you can see below, Hazelcast automatically discovers nodes running on the same network. Each Tomcat instance fires up a Hazelcast node and then the nodes connect to each other.



Retrieving the Account in the first server triggers an interaction with Stormpath as expected.

Now, let’s see what happens when the Account object is retrieved in the second service this time:


This time, the Account comes from the cache, which Hazelcast has automatically distributed around to its nodes.

Using a distributed cache in conjunction with service-to-service communication minimizes the number of round trips to Stormpath over the Internet.

Exception Handling for Consistent APIs

We want our Microservice to return an AccountsResponse object – no matter what. So, what happens if there’s an Exception? Spring Boot makes it easy to catch Exceptions and return whatever you want.

@ExceptionHandler(UnauthorizedException.class)
public @ResponseBody AccountsResponse unauthorized(UnauthorizedException ex) {
    return AccountsResponseBuilder.newInstance()
        .status(STATUS.ERROR)
        .message(ex.getMessage())
        .build();
}

The @ExceptionHandler annotation is the hook that causes Spring Boot to associate the named Exception (UnauthorizedException in this case) with the method.

When an UnauthorizedException is thrown, the unauthorized method is entered. An AccountsResponse object is created and returned. And, it’s this object that will be returned from the original method.


Well, it’s been quite a journey!

In this post, we’ve explored how to:

  1. Enable communication between two Microservices
  2. Secure that communication using signed JWTs
  3. Optimize the traffic that goes over the wire using a distributed cache
  4. Handle Exceptions using Spring Boot’s @ExceptionHandler annotation

Feel free to drop us a line via email anytime.

Like what you see? Subscribe to keep up with the latest releases.

January 21, 2016

KatasoftBuild Multi-Tenant .NET Applications [Technorati links]

January 21, 2016 10:46 PM

Designing multi-tenant applications can be tricky. The previous sentence may have been an understatement.

The ability to quickly spin up a new instance of your application is a powerful business case, but getting there involves serious engineering. Partitioning user data (and making sure it stays partitioned) is critical. A common use case involves treating a subdomain of your application URL as a tenant identifier (the acme in acme.yourapplication.com). Incoming requests are then examined and connected to the correct data source based on the subdomain.
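The "subdomain as a tenant identifier" idea described above can be sketched in a few lines (a minimal illustration; the helper name and base domain are hypothetical, and production code would also need to validate the tenant against a data store):

```python
def tenant_from_host(host, base_domain="yourapplication.com"):
    """Extract the tenant identifier (subdomain) from a request Host header."""
    host = host.split(":")[0].lower()  # drop any port
    suffix = "." + base_domain
    if not host.endswith(suffix):
        return None  # not a tenant subdomain
    subdomain = host[: -len(suffix)]
    # reject empty or nested subdomains like a.b.yourapplication.com
    return subdomain if subdomain and "." not in subdomain else None

print(tenant_from_host("acme.yourapplication.com"))       # acme
print(tenant_from_host("acme.yourapplication.com:8080"))  # acme
print(tenant_from_host("yourapplication.com"))            # None
```

The returned identifier would then be used to select the correct data source for the request.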

Designing for multi-tenancy involves modeling your data stores from the start with multiple tenants in mind. Data security should also be a major consideration. Anyone else have a headache yet?

Have no fear! Alongside support for token-based authentication, the latest release of our .NET SDK also includes full support for the Organization resource, which can make developing multi-tenant applications a lot simpler.

Build Multi-Tenant .NET Applications with Organizations

The high-level overview of using Stormpath to build multi-tenant applications is covered in the Multi-Tenancy with Stormpath guide.

The Organization resource represents a tenant of your application, and is a top-level container for Stormpath Account Stores (Directories and Groups). In addition to logically organizing (and separating) Account Stores, it includes some special sauce for working with multi-tenant applications. The nameKey property must be unique across all of your Organizations, and must follow DNS hostname rules. This constraint is specifically designed to support the “subdomain as a tenant identifier” use case.
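A nameKey validator following DNS hostname label rules might look like this (a sketch based on the RFC 1035 label grammar, not Stormpath's actual validation code):

```python
import re

# One DNS label (RFC 1035 flavor): 1-63 chars of lowercase letters,
# digits, and hyphens; must not start or end with a hyphen.
NAME_KEY_RE = re.compile(r"^(?!-)[a-z0-9-]{1,63}(?<!-)$")

def is_valid_name_key(name_key):
    return bool(NAME_KEY_RE.match(name_key))

print(is_valid_name_key("rebels"))          # True
print(is_valid_name_key("rebel-alliance"))  # True
print(is_valid_name_key("-rebels"))         # False
print(is_valid_name_key("Rebel Alliance"))  # False
```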

The .NET SDK makes it easy to:

Create and Manage Organizations

Creating an Organization is a snap:

var rebels = client.Instantiate<IOrganization>()
    .SetName("Rebel Alliance")
    .SetNameKey("rebels") // must be unique and follow DNS hostname rules
    .SetDescription("The Rebellion Against the Empire"); // optional
await client.CreateOrganizationAsync(rebels);

Updating an existing Organization is also straightforward:

// organizationHref is the Stormpath href of the Organization
var rebelsOrg = await client.GetOrganizationAsync(organizationHref);

// Disable the Organization. Login attempts will be rejected.
rebelsOrg.SetStatus(OrganizationStatus.Disabled);

// Save changes
await rebelsOrg.SaveAsync();

Add Directories and Groups to Organizations

Organizations represent top-level containers for Directories and Groups. Adding Account Stores to an Organization can be accomplished in a few ways:

// Add an existing IDirectory or IGroup instance:
await rebels.AddAccountStoreAsync(myGroupOrDirectory);

// Or, add by name:
await rebels.AddAccountStoreAsync("My Group Or Directory Name");

// Or, add by href:
await rebels.AddAccountStoreAsync(hrefOfGroupOrDirectory);

// Or, perform a lookup query that matches one Directory or Group:
await rebels.AddAccountStoreAsync<IDirectory>(
    dirs => dirs.Where(d => d.Name.StartsWith("My Dir")));

Just like Applications, an Organization can have a default Account Store and a default Group Store, which are the default locations for newly-created Accounts and Groups. If the default store is null, new Accounts or Groups cannot be created in the Organization. These properties can be updated with the appropriate methods:

// Get the default Account Store for an Organization
IAccountStore defaultAccountStore = await rebels.GetDefaultAccountStoreAsync();
// defaultAccountStore is an IDirectory or IGroup, or null

// Set the default Account Store
await rebels.SetDefaultAccountStoreAsync(myGroupOrDirectory);

// Get the default Group Store for an Organization
IAccountStore defaultGroupStore = await rebels.GetDefaultGroupStoreAsync();
// defaultGroupStore is an IDirectory, or null

// Set the default Group Store
await rebels.SetDefaultGroupStoreAsync(myDirectory);

Assigning an Organization to an Application

The Organization itself is an Account Store, so it can be assigned to an Application:

await myApp.AddAccountStoreAsync(rebels);

Adding an Organization to an Application will enable login for all of the Accounts contained in Directories and Groups assigned to the Organization, as well as any that are added in the future.

Authenticate Using Organizations

Stormpath handles login attempts using a specific login flow that iterates through all the Account Stores associated with an Application. In multi-tenant applications, you frequently need to restrict login to a specific tenant. This is possible by specifying an Organization (or Organization nameKey) during login:

// Using an IOrganization instance
var loginResult = await myApp.AuthenticateAccountAsync(
    request =>
        request.SetAccountStore(rebels)); // restrict search to Rebel Alliance tenant only

// Using an Organization href string
var loginResult = await myApp.AuthenticateAccountAsync(
    request =>
        request.SetAccountStore(organizationHref));

// Using an Organization nameKey
// (This is handy if you're using a subdomain identifier
// that you can quickly pull out of the request URL!)
var loginResult = await myApp.AuthenticateAccountAsync(
    request =>
        request.SetAccountStore("rebels")); // Tenant with nameKey "rebels" must exist

That’s all there is to it! Using Organizations and the features available in the Stormpath .NET SDK can make the development of multi-tenant .NET web applications faster and easier.

Further Reading

For more reading, check out the Multi-Tenancy with Stormpath guide.

MythicsOracle Enterprise Manager 13c Review - More Tools for the IT Toolbox [Technorati links]

January 21, 2016 06:19 PM

As an early Christmas gift, Oracle released the latest major release of Enterprise Manager to the public. This new release continues to extend the features and…

Kaliya Hamlin - Identity WomanIts getting Meta – Identity for the Identity students [Technorati links]

January 21, 2016 02:47 PM

Today I picked up my University of Texas at Austin ID card. Yes I’m now a Texas Longhorn. The Center for Identity here is teaming up with the Information School to offer a Masters of Science in Identity Management and Security and I’m enrolled in the first cohort of students.

It's getting meta here with the @UTcenterforID and the new @UTasMSIMS we got out IDs today from UofT. pic.twitter.com/zXL7gxEZOc

— Kaliya-IdentityWoman (@IdentityWoman) January 21, 2016

I’m not moving to Texas though. It’s a part-time program, one weekend a month, and I’m going to fly here to participate in classes.

Today we had an overview of the 10 classes that make up the program and learned about some of the research happening at the iSchool and the Center for Identity.

I wasn’t expecting this but we got a full tour of the football stadium, saw a bunch of statues of players and coaches and learned all about the team and the lore.  It reminded me of the first week I had at Cal and the feeling I had that week knowing I would be a Golden Bear for life (I was a student athlete though so there was a whole other layer to the experience).

I will be blogging about what we are up to.



Mike Jones - MicrosoftSecond OAuth 2.0 Mix-Up Mitigation Draft [Technorati links]

January 21, 2016 06:25 AM

OAuth logoJohn Bradley and I collaborated to create the second OAuth 2.0 Mix-Up Mitigation draft. Changes were:

The specification is available at:

An HTML-formatted version is also available at:

January 20, 2016

Bob BlakleyThe Live Music Capital of the World [Technorati links]

January 20, 2016 07:17 AM
A slideshow.

Vittorio Bertocci - MicrosoftUnboxing of the Samsung S34E790C – 34” Curved Monitor [Technorati links]

January 20, 2016 07:06 AM

As anybody who has written a book in their free time can tell you, after a full day of work it’s sometimes really hard to force yourself to spend the evening writing. A few months ago I found myself in a motivation crisis: to get out of it, I promised myself that if I turned in the manuscript on time, I would buy one of those big-*** curved monitors.

Well, guess what! I did hit all the correct dates, hence I treated myself to a Samsung S34E790C, a 34” monster with a beautiful, barely curved screen. Today I found a huge box waiting for me; here is the unboxing, in case you are considering getting one. Warning: as you might imagine, this post is going to be picture-heavy… and won’t talk about identity at all.

The Packaging

File Jan 19, 22 17 27

…is huge (cutter on top for scale), but in the end it weighs just 15 kg. Can you imagine what monster box you would have needed for a 34” CRT? It would have had its own ZIP code.

File Jan 19, 22 21 57

Embedded in the styrofoam, in its own little niche, you find one box that neatly stores ALL cables, manuals and the like. NO need to hunt those down in a huge box! My inner OCD person is satisfied.

File Jan 19, 22 22 12

Is there anything they won’t do with soy these days?

File Jan 19, 22 22 25

Here are the contents of the accessory box:

File Jan 19, 22 24 04

The adapter is on the big side (hand for scale), but the plug is delightfully angled and the cables are quite long. Thumbs up.

File Jan 19, 22 22 43

First peek at the monster. This thing is HUGE.

File Jan 19, 22 22 56

Incidentally, this package contains the largest number of silica gel packets I’ve ever seen. They must have known it was heading to not-so-sunny Seattle.

File Jan 19, 22 23 10

And finally here it is, out of its box. No sticky tapes to wrestle with. Samsung has done an excellent job on this package.

The Monitor

File Jan 19, 22 23 24

Here it is, ready to be hooked up.

File Jan 19, 22 23 38

The back. Very simple. The left side of the picture shows the power button, which doubles as a 4-way joystick for navigating the onscreen menus. The right-hand side shows the arsenal of ports.

File Jan 19, 22 23 50

Power, 2 HDMIs, 1 DisplayPort, USB extender, audio, and 4 USB ports. Did I mention that this monitor also has decent speakers?

File Jan 19, 22 24 18

And finally, it’s ON! Here it shows off its amazing horizontal resolution by stretching my desktop background. Long fish is long! I swear I did not change the background just to show off this effect; I have been using it for quite some time now – I am sure it has been captured in some demo or session recording, so you can check.

File Jan 19, 22 24 34

Just because I can, here is VS2015 with THREE vertical tab groups – opened on my little gender ratio estimator project (check it out here!). All perfectly legible. The Surface Book’s 3K screen is just there as a clipboard.

File Jan 19, 22 24 49

…And here is the feature that blows my mind. You can hook up two computers to this monitor and show output from both AT THE SAME TIME. Here my Surface Book is graciously sharing monitor space with my MacBook Pro.
This is not a gimmick; the real estate each one gets is perfectly workable. I can see myself switching to this mode while working on demos.


Well, I haven’t done any real work on this monitor yet… and back in the box it goes, given that I plan to bring it to work (BYOM). But from what I have seen so far, this is an amazing piece of hardware. Recommended!

January 19, 2016

Vittorio Bertocci - MicrosoftEstimating Gender Diversity in your Organization with the Azure AD Graph [Technorati links]

January 19, 2016 03:52 PM


Your directory data holds a treasure trove of insights, which are now exceptionally easy to access thanks to the Graph API layer on top of it.

A few weeks ago I was wondering what questions I could answer about my own organization with the Directory Graph alone, and I quickly landed on a great candidate: gender ratios.

I *love* working in our industry, but if there’s one thing I loathe, it is the dramatic gender imbalance that is an almost absolute invariant everywhere I look. In my past as a consultant I often worked for non-IT shops, where the ratio wasn’t as skewed, and loved the atmosphere – I can’t quite put my finger on it, but activities and collaboration seemed to take on a more balanced quality… healthier is the adjective that comes to mind.

I know that the industry leaders are hard at work to correct this and many other imbalances, and the growing awareness of the problem gives me hope for the future. Still, I thought it would be fun to play with data and verify whether some conjectures of mine were actually backed by numbers: are female managers more likely to lead orgs with more balanced gender ratios? Is it true that non-technical disciplines have better ratios? Are there specific business functions where the ratios are reversed? And so on, and so forth.

It goes without saying that I will NOT be sharing any data whatsoever about Microsoft here. Besides the fact that the estimates might be wildly inaccurate, it is absolutely not my place to talk about the company. Microsoft does an excellent job of maintaining transparency on workforce demographics.

What I am going to share, however, is the source code and the methodology I used for running my little experiments. If you have Office365 or any other Microsoft cloud services, you can run the code in your organization and get back an estimate of the gender mix of the reports of any user of your choice. You just need to download the code, compile and run – that works even if you are not an administrator. For the time being you need Windows, but if there’s interest I might port the app to .NET Core – which would allow you to run on Mac and Linux as well.

Let me stress that I make no guarantees about the precision of the resulting estimates, nor do I guarantee that my code is bug-free. Please consider this simply the chronicle of an afternoon spent geeking out with Azure AD, and my modest contribution to raising awareness around gender imbalance in tech.

Ready? Let’s dive!

The methodology

Where to begin? Let’s see. With the Directory Graph, I know I can crawl through the entire report structure of anybody. I can get the User object of the manager of the org I want to analyze, for example via his/her userPrincipalName: then I can recursively analyze all the User objects in the /directReports property of all subtrees. So I can get all the users in the entire sub-org – that part is easy.

How to tell the gender of a User? There is no Gender property readily available in the default schema.  The obvious alternative is… the user’s first name, naturally. We do have that, under the /GivenName property.

But wait, it’s not that simple! Certain names are male in some countries and female in others. For example, Andrea is THE most common name for a boy in Italy. At the same time, it is a super common female name in Germany – in the 60s it was in the top 10. Hence, we had better include country information in our estimate of gender from first name. The /country property exists in the Directory Graph, but it is often not populated. There is, however, another country-dependent property that is far more likely to be populated, and that’s telephoneNumber. Never mind that Americans often omit the country code!

Let’s take a step back and see where we’ve gotten so far. Our estimate of whether a User is male or female is going to be based on the User’s GivenName and the country derived from telephoneNumber. Sounds promising, but this is by no means perfect.

Even if we manage to obtain country information from the User’s telephone, all we are getting is the country from which the user is operating. An Andrea working in the USA might be a male migrant from Italy, or a female from a 2nd-generation German migrant family. Any estimate should take into account which cases are most prevalent for each name and country, which brings us straight into the realm of frequencies (and saddles us with the task of finding a source for those figures).
Compound this with the fact that some names stubbornly defy classification: Robin, Kim, Casey, Yi, Rama, Jamie and so on. Those cases, too, point to the need to base our estimates on probabilities rather than static classifications.

Here is the idea that made me feel very clever, at least for a few minutes: what if I just crawled Facebook’s public data and built a database of first names per country, tracking the frequency with which users self-declare their gender? That would by no means carry any guarantee of significance, but it would definitely be an improvement over static analysis!

Hitting the internet for inspiration, I soon stumbled on https://genderize.io/ – an awesome public service that has already done the crawling for us, across major social networks, and helpfully exposes its database through a super convenient API. It offers a generous free allowance of 1000 name queries per day. I wanted to do some heavy-duty work (and a lot of debugging) hence I bought the PLUS package (and I am now super worried about looking silly by leaking the key on GitHub!), but I am sure you can do a lot of experimentation with the free tier.

The API also offers a helpful probability associated with each gender estimate, plus the number of entries from which the estimate was extracted. That allows you to place confidence thresholds on your own evaluations if you so choose. Here is an example. Say that we want the estimate for Andrea in Italy. The call is very simple:

GET https://api.genderize.io/?name=andrea&country_id=it

The response includes the name, the proposed gender, the probability associated with it, and the count of entries the estimate is based on.

That’s a pretty high estimate! It’s not a 1.0 probability, given that we do have various Andreas from Germany or other countries living in Italy – I know a few myself. Let’s check Andrea in the US though.


Yep. It looks like the “Andrea”s in the US are mostly from countries where the name is female… or at least, that’s what people self-declare on social networks. Of course, to be precise we should also take into account any gender differences in social network usage… and compare the count to the total sample per country. But let’s not go too far – what we have now seems good enough for playing.
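For reference, building the query and picking apart a response can be sketched like this (the helper is illustrative, and the payload values are made up rather than real genderize.io data):

```python
import json
from urllib.parse import urlencode

def genderize_url(name, country_id=None):
    """Build a genderize.io query URL for a given first name."""
    params = {"name": name}
    if country_id:
        params["country_id"] = country_id
    return "https://api.genderize.io/?" + urlencode(params)

print(genderize_url("andrea", "it"))
# https://api.genderize.io/?name=andrea&country_id=it

# Picking the fields out of a response (illustrative payload):
sample = '{"name": "andrea", "gender": "male", "probability": 0.95, "count": 12345}'
estimate = json.loads(sample)
print(estimate["gender"], estimate["probability"])  # male 0.95
```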

Before I move on to describe the (simple) app that implements the above, I want to call out one last detail. Parsing phone numbers is a surprisingly complicated task, given that every country has its own formatting rules. Luckily there are a number of libraries that can help, the most famous probably being Google’s libphonenumber. Patrick Mézard and Aidan Bebbington nicely ported it to C# and made it available as a NuGet package, which I promptly used in the project. This NuGet is the main reason I did not write this directly in .NET Core – out of laziness, really. But if there’s interest, we can always course-correct and port it!
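The country-from-phone idea can be illustrated with a toy prefix table (real parsing should use libphonenumber; this sketch handles only a handful of prefixes and none of the per-country formatting rules):

```python
# Toy table of international dialing prefixes -> ISO country codes.
DIAL_PREFIXES = {"+39": "it", "+49": "de", "+44": "gb", "+1": "us"}

def country_from_phone(number, default="us"):
    """Guess a country code from a phone number's dialing prefix.

    Numbers without a leading '+' are treated as local (US) numbers,
    mirroring the observation that Americans often omit the country code."""
    digits = number.replace(" ", "").replace("-", "").replace("(", "").replace(")", "")
    if not digits.startswith("+"):
        return default
    # longest-prefix match over the toy table
    for prefix in sorted(DIAL_PREFIXES, key=len, reverse=True):
        if digits.startswith(prefix):
            return DIAL_PREFIXES[prefix]
    return None

print(country_from_phone("+39 06 1234 5678"))  # it
print(country_from_phone("(425) 555-0100"))    # us
```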

The app

The application is a simple console app, which can be found in this repo. At launch, it takes as input the userPrincipalName (which can be different from the email, beware) of the manager whose org you want to analyze. If it’s the first time you’ve launched the app (or it’s been a while since you last ran it) you’ll get prompted for credentials – make sure you use an account that belongs to the directory you want to work with. I chose the console app format for two reasons:

– It can be modeled as a native client, which is automatically multi-tenant and can access the directory as the signed-in user. That means the app requires no setup in your directory, and does NOT require an admin to function. That’s pretty much equivalent to a user running an LDAP read query against a classic on-prem AD. Modeling the app as a multi-tenant web app would have required admin consent to gain directory read rights, which would have greatly limited the number of people who could run this analysis.

– It has no UX requirements, which makes it runnable on headless boxes – and above all, makes it easy to port to Linux and Mac via .NET Core. You are going to do your analysis in Excel anyway, so even if I had thrown in a couple of pie charts, I would not have added much real value beyond the insights you can get from the textual output.

Once it gets a valid token, the app caches it for future use (in a file, token.dat, that can be decrypted only on the machine it was generated on – but be careful with it anyway). Then it passes the token to a factory for the Account class, a wrapper that uses the Graph API to retrieve the first name and telephone number (hence, the country) of the manager.

That done, the app calls the Account class method GetGenderMix – which

One thing to notice about GenderizeProxy is that it allows you to specify confidence thresholds, like ProbabilityThreshold (the confidence level below which an estimate is reported as indefinite instead of the proposed gender guess) and CountThreshold (the minimum number of social-network entries an estimate must be based on to be considered reliable). Playing with those thresholds can change the numbers pretty significantly, although in my experience the ratios often remain surprisingly stable.
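The thresholding described above can be sketched as follows (hypothetical names mirroring ProbabilityThreshold and CountThreshold; this is not the actual GenderizeProxy source):

```python
def classify(gender, probability, count,
             probability_threshold=0.8, count_threshold=10):
    """Apply confidence thresholds to a raw name/gender estimate.

    An estimate below the probability threshold, or backed by too few
    social-network entries, is reported as 'indefinite'."""
    if gender is None:
        return "indefinite"
    if probability < probability_threshold or count < count_threshold:
        return "indefinite"
    return gender

print(classify("female", 0.95, 5000))  # female
print(classify("male", 0.55, 5000))    # indefinite (probability too low)
print(classify("female", 0.99, 3))     # indefinite (too few entries)
```

Raising either threshold moves more borderline names into the indefinite bucket, which is why the totals shift while the ratios tend to stay stable.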

Once GetGenderMix has finished, the app prints some summary results to the console and saves a CSV file with the gender mixes for all the managers in the org you analyzed. Just double-click it, and have a ball in Excel finding correlations and interesting numbers (hint: I found AVERAGEIF super useful).

Give it a Spin

As should be abundantly clear at this point, this little app is by no means guaranteed to offer a precise assessment of the gender mix in your organization. It certainly doesn’t hold a candle to what your HR department already knows with zero uncertainty, and always keeping that in mind is a healthy thing.

That said, I *love* how empowering this thing is. As mentioned, I personally used it to verify some theories I had – in some cases they panned out, in others I was surprised to be proven wrong, but the meta-point here is that they got me to think about the problem. I hope this will get you to think more deeply about gender imbalance too – and if you learn how to play with Azure AD in the process, I can’t say I’ll be disappointed.

Nat SakimuraThinking About Freedom of Speech, Personal Data Protection, Human Rights, Dignity, and Privacy Through the Privacy of the Dead [Technorati links]

January 19, 2016 03:49 PM



Screen Shot 2016-01-20 at 00.00.11

[Figure 1] Toshinao Sasaki on Twitter: https://twitter.com/sasakitoshinao/status/688556116226121728


The problem of the privacy of the dead is a good topic for highlighting the difference between compliance with the Act on the Protection of Personal Information and the protection of privacy, as well as the gap between human rights and privacy. At the same time, a situation like the current one, in which episode after episode about the victims is being reported, is also a good opportunity to reconsider the relationship between the “freedom of speech” the media invokes and “privacy”.










[Figure 2] Self-perception, others’ perception, and privacy

A third party watching this may realize that, once someone dies, the relationships built during their lifetime fall apart, and may feel anxious and unhappy as a result. One possible challenge is to treat protecting against this as a legal interest, and to ask whether a privacy of the dead (#死者のプライバシー) could be derived from human-rights provisions.

Another approach might be to derive it by analogy from the question of why wills are legally recognized. I would like legal scholars to give the privacy of the dead (#死者のプライバシー) problem some thought, including from the perspectives above.





Copyright © 2016 @_Nat Zone All Rights Reserved.

Matthew Gertner - AllPeersEffective Tips for Finding the Right Home Security System [Technorati links]

January 19, 2016 03:47 PM

If you’re in the process of shopping for a home security system, there are several focal points to evaluate. If you’re a homeowner with children, installing a state-of-the-art monitoring system might be something you’d be inclined to pursue. If, on the other hand, you already have trained guard dogs, deadbolt locks on doors and windows, and motion-sensor lighting, installing a home security system with basic features may be the savvier alternative.

Effective Tips for Finding the Right Home Security System

Whatever ‘security’ path you choose to take, it’s imperative that you analyze your current lifestyle to gain a better perspective for finding the right home security system.

Here are several proven suggestions for helping you find the home security system that best fits your lifestyle:

1.) Establish your wants and needs: Just about every home security system utilizes a different method of protection. Premier providers, such as ADT, maintain infrared technology and high-decibel siren technology; basic providers only offer gate and perimeter protection. Whether you seek an ultramodern system or prefer more traditional methods of security—it’s important to write down the goal and objectives you wish to pursue before settling on a purchase.

2.) Gain transparent third-party opinions: Sometimes the only way to obtain an authentic opinion is to ask friends and family what they think about their home security systems. As in all facets of sales, businesses will use everything in their arsenal to hide negative content concerning their product. By receiving information through someone you can wholeheartedly trust, you have the best resource for unbiased information. Ask them about their own experiences and whether or not their security system provides them with the peace of mind to sleep soundly at night.

3.) Survey the Internet: If you have yet to utilize the Internet, now is the time to start exploiting its positive features to help you find the right home security system. Research and evaluate customer reviews; use third-party resources like Yelp to find readily available information on consumer experiences; additionally, evaluate each competing business’s pricing, discount opportunities, features, etc.; and talk with a company representative to evaluate whether their customer service department is professional, patient and knowledgeable.

4.) Let companies prove themselves: In order to make a savvy purchasing decision, ask that all serious candidates come to your house for an in-home evaluation and sales presentation. Each security ‘contender’ should be methodically analyzed to make sure it provides the type of features you’re looking for. Look out for improper sales tactics such as limited-time offers, high-pressure buying opportunities, statistics about “recent crimes in the area,” sales opportunities tailored only for you, etc.

In conclusion, you and your family’s livelihood are at stake, so it’s important to make a well-informed decision that covers each of these suggestions. Making the mistake of installing the wrong security system affects your peace of mind—so review your options, evaluate competitors, research reviews, consider your goals and find the provider that best fits your lifestyle.

Image: https://pixabay.com/en/security-word-lasers-modern-secure-574079/

The post Effective Tips for Finding the Right Home Security System appeared first on All Peers.

Mark Wilcox - OracleSecurity Roundup January 19 2016 [Technorati links]

January 19, 2016 03:09 PM
Here are my favorite security related links for the week. Remember you need to have a database security assessment. Contact me - mark dot wilcox at oracle dot com.

Matthew Gertner - AllPeersThe Best Travel Apps for 2016 [Technorati links]

January 19, 2016 02:06 PM

We are more and more reliant on our phones and tablets for every aspect of our lives, so what makes traveling any different? Take a look at some of the most popular and best travel apps for 2016 and make sure you are ready to have the best possible time on your next trip.

travel apps for 2016

The search giant has long since ventured into a variety of industries, and travel is no different. Google Flights allows you to “farecast” with the best of them. You can pick flexible dates and even flexible locations to ensure you get the lowest fares possible. Once you get your ticket, prepare for the language barrier with Google Translate. Google’s long-standing translation service has been the butt of many jokes, but this tool has proven invaluable when traveling abroad. With current support for over 90 languages and the ability to use your phone’s camera to translate instantly, unless you already speak the language you had better pack your smartphone.

Get Your Guide App
Get Your Guide’s Travel app is a multipurpose travel app that excels at booking low cost, high fun adventures. Downloadable at https://www.getyourguide.com/apps/ and available for a variety of smart phone platforms this travel app allows you to browse, book, and review different excursions when traveling abroad. They also offer a price guarantee and easy, no paper, last minute adventure booking. The Get Your Guide App is essential for making sure you find the best things to do when traveling.

Uber has expanded its services to over 70 countries worldwide, including travel destinations like Italy, Turkey, and Thailand. The travel app is available on all major smartphone platforms, including iOS, Windows Phone, and Android, and in a variety of languages. A cool thing about the Uber app is that in a foreign country the app’s language stays the same, so you can hail a car in Istanbul without knowing a single word of Turkish. In fact, the Uber travel app works exactly the same in a foreign country as it does back home, including your settings and credit card details. So when your plane lands, reach for your phone, beat the lines and ride to your hotel in style and comfort.

So whether your goal is to save money on travel, squeeze every ounce of excitement out of your trip, or deal with as few stresses as possible, these travel apps will help you. There are hundreds more, but these three are essential for getting the most out of your trip. So remember to download them before you go; and as always, don’t forget your passport, keys, phone or international power charger.

The post The Best Travel Apps for 2016 appeared first on All Peers.

Anil JohnHow to Work on the Wildly Important while Walking in a Windstorm [Technorati links]

January 19, 2016 12:30 AM

I went from conflating activity and productivity to understanding that time is a finite resource best spent on the disciplined pursuit of the wildly important. It is simple, but not easy. I'll tell you the habits that are working for me and provide pointers to resources I found helpful on this journey so you don't have to waste your time.

How to Work on the Wildly Important while Walking in a Windstorm

Click here to continue reading. Or, better yet, subscribe via email and get my full posts and other exclusive content delivered to your inbox. It’s fast, free, and more convenient.

The opinions expressed here are my own and do not represent my employer’s view in any way.

January 18, 2016

KatasoftNew! SAML Support For Your Customer Apps [Technorati links]

January 18, 2016 04:00 PM

Support SAML in your application with Stormpath

Integrate with Popular SAML Identity Providers in Minutes.

Today we launched support for the SAML standard for authentication and user management.

Applications that use Stormpath for user management will now be able to use popular identity providers (IdPs) for Single Sign-On (SSO) capability. In other words, Stormpath-backed apps are now SAML service providers that work with SAML services like OneLogin, Okta, Salesforce or any other SAML IdP, including home-grown and open source options.

SAML support is one of the features most frequently requested by our customers, and it raises the bar on the enterprise-readiness of the Stormpath service. With multi-tenant customer partitioning, LDAP/AD integration, social login, high scalability and availability, Stormpath can fully support identity in an enterprise service or cloud platform. B2B applications that require multi-tenancy or have customers using different IdPs will save a great deal of time (and frustration) with this feature. User-initiated Single Sign-On also lets you build applications that provide a seamless end-user experience.

And if you want to skip reading, Sign Up for our free Webinar, “No-Code SAML Support with Stormpath” on Thursday January 21st, 10am PT / 1pm ET.

Let’s dive in!

First, What Is SAML?

SAML (Security Assertion Markup Language) is an XML-based standard for securely exchanging authentication and authorization information between entities—specifically between identity providers, service providers, and users. Well-known IdPs include Salesforce, Okta, OneLogin, and Shibboleth. Your apps are the SAML service providers, and the Stormpath API makes it possible to integrate them with the IdPs (but without headaches).

Stormpath SAML Service Provider Support

Our initial support for SAML includes an update to the Stormpath REST API, along with our SDKs (see Resources below). Instead of working with XML or even directly with SAML itself, Stormpath allows you to set up SAML consumption by just adding some configuration to our SDK and the Stormpath console. From there, your applications can consume SAML assertions from any SAML IdP.

Now you can create applications that deliver a unified and seamless SSO experience for end users, without any custom code. Stormpath-backed applications can now authenticate users without requiring a separate login. Like all features at Stormpath, SAML support comes with pre-built customer screens and workflows through ID Site.

Easy SAML Consumption from Any IdP

With Stormpath your application can support multiple IdPs, so you can connect your application to separate userstores, e.g. Okta, Salesforce, and Shibboleth with just a little configuration. This makes it easy to meet your customer requirements: if one customer needs your app to connect to their home-grown IdP, and another uses Ping Identity, Stormpath SAML makes the integration easy for your applications and developers.

Configuration-based attribute mapping enables seamless support for different identity providers, allowing them to assert account attributes into your application. For example, if one IdP says that variable firstName=Tom and another IdP says fn=Tom, you can use Stormpath to map both to a variable called givenName within your application.
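To make the idea concrete, here is a minimal, hypothetical sketch of how configuration-based attribute mapping works conceptually: each IdP gets a small mapping table that translates its asserted attribute names onto your application's local schema. The mapping format, IdP names, and attribute names below are illustrative assumptions, not Stormpath's actual configuration syntax.

```python
# Per-IdP mapping tables: remote attribute name -> local attribute name.
# (Hypothetical example; Stormpath's real configuration lives in its
# console/SDK, not in application code like this.)
ATTRIBUTE_MAPS = {
    "okta":      {"firstName": "givenName", "lastName": "surname"},
    "homegrown": {"fn": "givenName", "ln": "surname"},
}

def normalize_assertion(idp: str, asserted: dict) -> dict:
    """Translate attributes asserted by a given IdP into local names.

    Attributes with no mapping entry pass through unchanged.
    """
    mapping = ATTRIBUTE_MAPS.get(idp, {})
    return {mapping.get(name, name): value for name, value in asserted.items()}

# Two IdPs asserting the same fact under different names both end up
# populating the application's single local attribute:
print(normalize_assertion("okta", {"firstName": "Tom"}))       # {'givenName': 'Tom'}
print(normalize_assertion("homegrown", {"fn": "Tom"}))         # {'givenName': 'Tom'}
```

The design point is that the application code only ever sees `givenName`; adding a customer with a new IdP means adding one mapping entry, not touching application logic.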

Multi-Tenant Customer Data Are Built In

Many Stormpath-backed applications use our robust Authorization functionality to partition customer organizations in their SaaS application, and this release makes SAML very accessible for those applications.

Typically, SAML implementations require a separate instance for each identity provider. Stormpath’s approach to SAML support is free from this constraint, giving you the flexibility to support diverse IdPs for different customer organizations within the same instance of your application. This also makes it easy to achieve customer compliance and privacy requirements, without a lot of operational overhead to also support SAML.

SP-Initiated Login Flows or IdP-Initiated Flows

Stormpath SAML support also provides flexibility in the point of entry for authentication. End users can access the IdP portal first and then be automatically authenticated for the Stormpath-backed application. Or they can enter through the Stormpath-backed application and automatically be authenticated for all the apps attached to the IdP as well.

Stormpath SAML Resources

We will be publishing a number of tutorials and demos for SAML support in our SDKs in the coming weeks.

January 15, 2016

Julian BondStar Wars VII - Here be some small spoilers. Look away now. [Technorati links]

January 15, 2016 01:45 PM

Julian BondMigrant Architecture [Technorati links]

January 15, 2016 07:39 AM
Migrant refugee camps seen as a city-building, architecture problem. Every so often Bruce Sterling finds this stuff and this is one of those things that spins off ideas.

In several places around the world, but especially in the Middle East and Europe, we've got an ongoing migration problem as war or water or climate change or food shortages force people to move to stay alive. Traditionally we treat this as a temporary crisis and set up temporary camps with tight controls. But these camps aren't really temporary: it's common for people to be stuck in them for 15 years or more. The people inside them start doing city building and building a camp society regardless of how we attempt to control it. So the big idea is to place the camps in places that need repopulation and encourage the refugees to use free enterprise to build a new city there. Examples might be the emptying south and centre of Italy, and especially the empty new towns and building projects in Spain. But this also applies to places in central Germany as well.

The question is how much infrastructure, law and order and control we have to provide to kick start the process. The infrastructure is not just food/water/housing. Modern migrants have cellphones, so electricity/cellphone coverage/internet is important as well. Perhaps the Migrant City should be a temporary autonomous zone or free port. Does that mean an "Escape from New York" compound with high walls? There's the possibility of experiments in new forms of social organisation here. Then there's the jobs problem. The ideal locations for re-population are often empty because there's no work. That's certainly true of Italy/Spain but less so for Eastern Europe. If this is a permanent rather than temporary city then the occupants need to fairly quickly move to generating wealth, not just consuming it. What happens to guaranteed basic income or benefits for the migrants? How quickly do they get citizenship of the regional, national and super-national structures where it's located?

It's good to see architects being interested in this as part of a long tradition from Wren to Corbusier. Both on the macro and micro scale, from city layout planning to IKEA flat pack housing. The camps may start as rigid lines of tents but the residents quickly start modifying them. Which then leads to Favela Chic and the kind of (semi)functional chaos of Sao Paolo or the townships of S Africa. Should this be encouraged or discouraged? Rather than try and control it, perhaps it would be better to have an orientation point that hands out the essentials but then to let the city self organise.

Finally there's the problem of land ownership. The whole of the western world is now owned. To make this work a space has to be cleared, presumably by government, for the Migrant City to be placed in. Does that mean compensating the current owners of the land in some way? Or do they get to charge rent?

How does this vary round the world? From S to N America. Europe compared with Africa. China compared with Siberia.

And all that starts with a simple idea. Refugee Camps aren't temporary and they shouldn't be.


Via one of Bruce Sterling's tumblrs

btw. That photo really reminds me of the really big festivals like Glastonbury or Burning Man. Camps should include entertainment, art and music. And no, Glastonbury and Burning Man are not preparation for finding yourself as a refugee!

The IKEA refugee shelter is an inspiring story.


(apologies for the daily mail link. ;) It's there for the 10,000 figure. )

The Better Shelter has made it to bOingbOing
January 14, 2016

OpenID.netLeaders Lead [Technorati links]

January 14, 2016 09:52 PM

The inaugural meeting of the iGov Working Group took place on Wednesday, January 14th, where three co-chairs were elected by acclamation. John Bradley of Ping Identity, Paul Grassi of the US NIST and Adam Cooper of the UK Cabinet Office Identity Assurance Program are the elected co-chairs. Acclamation may be a bit strong a description for an electoral process closer to being shanghaied. All the same, all of us know leadership is a classic key success factor.

However leaders emerge, they are essential to success, especially in the “sausage making” of standards development. The configuration of iGov’s leadership is intentional; the leaders map onto the WG’s mission. John’s Chilean/Canadian identity and unique technical chops, Paul Grassi’s past pedigree and present position in the US Government, and Adam Cooper’s architectural expertise that stretches into European standards and schemes together form iGov’s leadership team.

Leaders lead, and we look to these men to manage the process and lead work group contributors to a common goal. Please consider joining this effort. The work group’s goal is to have a common deployment profile that can be customized for the needs of both public and private sector deployments in multiple jurisdictions that may require the higher levels of security and privacy protections that OpenID Connect currently supports. The resulting profile’s goal is to enable users to authenticate and share consented attribute information with public sector services across the globe.

The full draft charter is available at http://openid.net/igov-wg-draft-charter/

MythicsMythics Launches Oracle Database, Java and Process Cloud Rapid Success Solutions [Technorati links]

January 14, 2016 05:43 PM

Mythics is releasing three new Oracle Cloud Solutions today that address specific needs – needs…

Julian BondWhat we do now will stick around for 400,000 years. Flex your muscles, mankind. Doesn't that give you... [Technorati links]

January 14, 2016 09:12 AM
What we do now will stick around for 400,000 years. Flex your muscles, mankind. Doesn't that give you a sense of ... Power!

SeeAlso: Hot Earth Dreams - Frank Landis. http://www.amazon.co.uk/Hot-Earth-Dreams-climate-happens/dp/1517799392/ref=sr_1_1
 Fossil fuel burning 'postponing next ice age' »
Climate change is altering global cycles to such an extent that the next ice age has been delayed for at least 100,000 years, according to new research identifying Earth’s deep-freeze tipping point

[from: Google+ Posts]