March 27, 2015

Julian Bond: Avast thah, me hearties! [Technorati links]

March 27, 2015 09:22 AM
Avast thah, me hearties!

Google have a new auto-proxy service that speeds up unencrypted web pages by compressing them on Google's proxy servers between the website and your device.
https://support.google.com/chrome/answer/2392284?p=data_saver_on&rd=1

This has an unintended but hilarious side effect. There's a bunch of websites that are blocked by UK ISPs for copyright issues. So if you go to, for instance, http://newalbumreleases.net/ you will normally be blocked by a Virgin Media/BT/TalkTalk warning message. But if you have Google's data saving Chrome extension installed, it acts like a VPN and sidesteps the block.

Then there's https://thepiratebay.se/. They've successfully implemented an https:// scheme that also sidesteps the same UK ISP block.

For the moment, we're saved. But stand by to repel boarders!
 Reduce data usage with Chrome’s Data Saver - Chrome Help »

[from: Google+ Posts]
March 26, 2015

Paul Madsen: NAPPS - a rainbow of flavours [Technorati links]

March 26, 2015 08:24 PM
Below is an arguably unnecessarily vibrant swimlane of the proposed NAPPS (Native Applications) flow for an enterprise-built native application calling an on-prem API.

The very bottom arrow of the flow (that from Ent_App to Ent_RS) is the actual API call that, if successful, will return the business data back to the native app. That call is what we are trying to enable (with all the rainbow-hued exchanges above).

As per normal OAuth, the native application authenticates to the RS/API by including an access token (AT). Also shown is the possibility of the native application demonstrating proof of possession for that token, but I'll not touch on that here other than to say that the corresponding spec work is underway.

What differs in a NAPPS flow is how the native application obtains that access token. Rather than the app itself taking the user through an authentication & authorization flow (typically via the system browser), the app gets its access token via the efforts of an on-device 'Token Agent' (TA). 

Rather than requesting an access token of a network Authorization Service (as in OAuth or Connect), the app logically makes its request of the TA, labelled below as 'code Request + PKSE'. Upon receiving such a request from an app, the TA will endeavour to obtain from the Ent_AS an access token for the native app. This step is shown in green below. The TA uses a token it had previously obtained from the AS in order to obtain a new token for the app.

In fact, what the TA obtains is not the access token itself, but an identity token (as defined by Connect) that can be exchanged by the app for the more fundamental access token - as shown in pink below. While this may seem like an unnecessary step, it actually

  1. mirrors how normal OAuth works, in which the native app obtains an authz code and then exchanges that for the access token (this having some desirable security characteristics)
  2. allows the same pattern to be used for a SaaS app, i.e. one where there is another AS in the mix and we need a means to federate identities across the policy domains.




When I previously wrote 'TA uses a token it had previously obtained from the AS', I was referring to the flow coloured in light blue above. This is a pretty generic OAuth flow; the only novelty is the introduction of the PKSE mechanism to protect against a malicious app stealing tokens by sitting on the app's custom URL scheme.
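
For concreteness, here is a minimal sketch of the proof-key piece of that request. What the post labels PKSE corresponds to the PKCE (Proof Key for Code Exchange) proposal; the NAPPS drafts may differ in endpoint names and message shapes, so treat this purely as an illustration of the verifier/challenge pair the app would generate before calling the Token Agent.

import base64
import hashlib
import os

def make_pkce_pair():
    # High-entropy, URL-safe verifier that stays inside the app.
    verifier = base64.urlsafe_b64encode(os.urandom(32)).rstrip(b"=").decode()
    # Challenge = BASE64URL(SHA-256(verifier)), sent along with the code request.
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

verifier, challenge = make_pkce_pair()
# The app sends `challenge` with its code request to the Token Agent and later
# proves possession by presenting `verifier` when redeeming the returned code.

Because only the app that generated the challenge can produce the matching verifier, a rogue app squatting on the custom URL scheme cannot redeem an intercepted code.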


Kantara Initiative: UMA V1.0 Approved as Kantara Recommendation [Technorati links]

March 26, 2015 05:36 PM

Congratulations to the UMA Work Group on this milestone!

The User-Managed Access (UMA) Version 1.0 specifications have been finalized as Kantara Initiative Recommendations, the highest level of technical standardization Kantara Initiative can award. UMA has been developed over the last several years by industry leaders in our UMA Work Group.

The main spec is officially known as User-Managed Access (UMA) Profile of OAuth 2.0 but is colloquially known as UMA Core. UMA Core defines how resource owners can control protected-resource access by clients operated by arbitrary requesting parties, where the resources reside on any number of resource servers, and where a centralized authorization server governs access based on resource owner policies.

UMA Core calls several other specs by reference, but only one referenced spec is currently a product of the UMA WG. Officially known as OAuth 2.0 Resource Set Registration but colloquially known as RSR, this spec defines a resource set registration mechanism between an OAuth 2.0 authorization server and resource server. The resource server registers information about the semantics and discovery properties of its resources with the authorization server. The RSR mechanism is useful not just for UMA, but potentially for OpenID Connect and plain OAuth use cases as well.
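
To make the RSR idea concrete, here is a rough sketch of a resource server describing one of its resource sets to the authorization server. The endpoint path, field names, and the protection API token (PAT) value are illustrative assumptions, not the normative wire format; consult the RSR specification for the exact details.

import requests

PAT = "protection-api-token-issued-to-the-resource-server"  # hypothetical value

resource_set = {
    "name": "Photo Album",        # human-readable description of the resource set
    "scopes": ["view", "edit"],   # scopes the resource server can enforce
}

resp = requests.post(
    "https://as.example.com/uma/resource_set",   # hypothetical registration endpoint
    json=resource_set,
    headers={"Authorization": "Bearer " + PAT},
)
print(resp.status_code, resp.json())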

March 25, 2015

Gluu: UMA 1.0 Approved by Unanimous Vote! [Technorati links]

March 25, 2015 06:08 PM


This week voting member organizations at the Kantara Initiative unanimously approved the User Managed Access (UMA) 1.0 specification, a new standard profile of OAuth2 for delegated web authorization. More than half of the member organizations were accounted for on the vote to reach quorum and provide the support needed for approval.

The unanimous approval of UMA 1.0 marks a major milestone in the advancement and adoption of open web standards for security. In conjunction with OAuth2 and OpenID Connect, UMA provides an open and interoperable foundation for web, mobile, and IoT security that has until now only been possible to achieve through proprietary vendor APIs.

UMA offers individuals and organizations an unprecedented level of control over data sharing and resource access, and the unanimous approval of the specification signifies that UMA is well positioned for large scale adoption on the Internet.

UMA enables web and API servers to delegate policy evaluation to a central policy decision point. An UMA authorization server can evaluate any policies to determine whether to grant access to an API to a certain client or person. The type of authentication used by the person, the geolocation of the request, the time of day, and the score of a fraud detection algorithm are all examples of data that can be considered before access is granted.
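
As a toy illustration only (not the Gluu Server API or UMA protocol messages), a policy of that kind might boil down to a check like the following, where every field name is invented for the example:

from datetime import time

def allow_access(ctx):
    # ctx describes the access attempt; all keys here are made up for illustration.
    if ctx["authn_method"] not in ("otp", "fido"):           # how the person authenticated
        return False
    if ctx["country"] not in ("GB", "US"):                   # geolocation of the request
        return False
    if not time(8, 0) <= ctx["local_time"] <= time(18, 0):   # time of day
        return False
    if ctx["fraud_score"] >= 0.7:                            # fraud detection score
        return False
    return True

print(allow_access({"authn_method": "otp", "country": "GB",
                    "local_time": time(12, 30), "fraud_score": 0.1}))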

A centralized UMA authorization server (like the Gluu Server) can leverage OpenID Connect for client and person authentication. UMA is in fact complementary to OpenID Connect, and enables a feature known as “trust elevation” or “stepped-up” authentication.

The Gluu Server will be updated to support UMA 1.0 in release 2.2, expected in time for the RSA Security Conference when the UMA standard is finalized.

For more information and a list of UMA implementations, visit the Kantara UMA page.

Christopher Allen - Alacrity: 10 Design Principles for Governing the Commons [Technorati links]

March 25, 2015 03:55 AM

In 2009, Elinor Ostrom received the Nobel Prize in Economics for her “analysis of economic governance, especially the commons.”

Since then I've seen a number of different versions of her list of the 8 principles for effectively managing against the tragedy of the commons. However, I've found her original words — as well as many adaptations I've seen since — to be not very accessible. Also, since the original release of the list of 8 principles there has been some research resulting in updates and clarifications to her original list.

This last weekend, at two very different events — one on the future of working and the other on the future of the block chain (used by technologies like bitcoin) — I wanted to share these principles. However, I was unable to effectively articulate them.

So I decided to take an afternoon to re-read the original, as well as more contemporary adaptations, to see if I could summarize this into a list of effective design principles. I also wanted to generalize them for broader use, as I often apply them to everything from how to manage an online community to how a business should function with competitors.

I ended up with 10 principles, each beginning with a verb-oriented commandment, followed by the design principle itself.

  1. DEFINE BOUNDARIES: There are clearly defined boundaries around the common resources of a system from the larger environment.
  2. DEFINE LEGITIMATE USERS: There is a clearly defined community of legitimate users of those resources.
  3. ADAPT LOCALLY: Rules for use of resources are adapted to local needs and conditions.
  4. DECIDE INCLUSIVELY: Those using resources are included in decision making.
  5. MONITOR EFFECTIVELY: There exists effective monitoring of the system by accountable monitors.
  6. SHARE KNOWLEDGE: All parties share knowledge of local conditions of the system.
  7. HOLD ACCOUNTABLE: Have graduated sanctions for those who violate community rules.
  8. OFFER MEDIATION: Offer cheap and easy access to conflict resolution.
  9. GOVERN LOCALLY: Community self-determination is recognized by higher-level authorities.
  10. DON'T EXTERNALIZE COSTS: Resource systems embedded in other resource systems are organized in and accountable to multiple layers of nested communities.

I welcome your thoughts on ways to improve on this summarized list. In particular, in #10 I'd like to find a better way to express its complexity (the original is even more obtuse).

March 24, 2015

Kantara Initiative: Kantara Initiative grants Scott S. Perry CPA, PLLC Accredited Assessor Trustmark at Assurance Levels 1, 2, 3 and 4 [Technorati links]

March 24, 2015 06:20 PM

PISCATAWAY, NJ – (24 March, 2015) – Kantara Initiative is proud to announce that Scott S. Perry CPA, PLLC is now a Kantara-Accredited Assessor with the ability to perform Kantara Service Assessments at Assurance Levels 1, 2, 3 and 4. Scott S. Perry CPA, PLLC is approved to perform Kantara Assessments in the jurisdictions of USA, Canada and Worldwide.

Joni Brennan, Kantara Executive Director said, “Kantara Initiative is dedicated to enabling verified trust in identity services via our Credential Service Provider Approval Program. We are pleased to welcome Scott S. Perry CPA, PLLC as a new Kantara-Accredited Assessor.” View our growing list of Kantara-Accredited Assessors and Approved Services: https://kantarainitiative.org/trust-registry/ktr-status-list/

Scott Perry, Principal at Scott S. Perry CPA, PLLC said, “We view our accreditation to perform Kantara Assessments as a key milestone in extending our digital trust services to ICAM and industry Trust Services Providers. We’re proud to be the only CPA firm to offer Kantara Assessments.”

A global organization, Kantara Initiative Accredits Assessors, Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems.

Kantara Initiative's mission includes further harmonizing and extending the Identity Assurance program to address multiple industries and international jurisdictions. Kantara Initiative is already approved as a US Federal Trust Framework Provider.

The key benefits of Kantara Initiative Trust Framework Program participation include: rapid onboarding of partners and customers, interoperability of technical and policy deployments, an enhanced user experience, and competition and collaboration with industry peers. The Kantara Initiative Trust Framework Program drives toward modular, agile, portable, and scalable assurance to connect business, governments, customers, and citizens. Join Kantara Initiative now to participate in the leading edge of trusted identity innovation development. For further information, or to accelerate your business by becoming Kantara Accredited or Approved, contact secretariat@kantarainitiative.org

About Kantara Initiative

Kantara Initiative is an industry and community organization that enables trust in identity services through our compliance programs, requirements development, and information sharing among communities including: industry, research & education, government agencies and international stakeholders. http://www.kantarainitiative.org  

About Scott S. Perry CPA, PLLC

Scott S. Perry CPA, PLLC is a Registered CPA Firm specializing in Technology Audits. The Firm is a global leader in Public Key Infrastructure (PKI), Service Organization Controls (SOC), WebTrust, Kantara, ISO 27001, and Sarbanes-Oxley (SOX) Audits. To learn more of the Firm’s services and qualifications, please visit www.scottperrycpa.com

Radovan Semančík - nLight: Comparing Disasters [Technorati links]

March 24, 2015 06:09 PM

A month ago I described my disappointment with OpenAM. My rant obviously attracted some attention in one way or another. But perhaps the best reaction came from Bill Nelson. Bill does not agree with me. Quite the contrary. And he has some good points that I can somewhat agree with. But I cannot agree with everything that Bill points out, and I still think that OpenAM is a bad product. I'm not going to discuss each and every point of Bill's blog. I would summarize it like this: if you build on a shabby foundation, your house will inevitably turn to rubble sooner or later. If a software system cannot be efficiently refactored, it is as good as dead.

However, this is not what I wanted to write about. There is something much more important than arguing about the age of OpenAM code. I believe that OpenAM is a disaster. But it is an open source disaster. Even though it is bad, I was able to fix it and make it work. It was not easy and it consumed some time and money. But it is still better than my usual experience with the support of closed-source software vendors. Therefore I believe that any closed-source AM system is inherently worse than OpenAM. Why is that, you ask?

Firstly, I was able to fix OpenAM by just looking at the source code. Without any help from ForgeRock. Nobody can do this for a closed-source system. Except the vendor. A running system is extremely difficult to replace. Vendors know that. The vendor can ask for an unreasonable sum of money even for a trivial fix. Once the system is up and running the customer is trapped. Locked in. No easy way out. Maybe some of the vendors will be really nice and they won't abuse this situation. But I would not bet a penny on that.

Secondly, what are the chances of choosing a good product in the first place? Anybody can have a look at the source code and see what OpenAM really is before committing any money to deploy it. But if you are considering a closed-source product you won't be able to do that. The chances are that the product you choose is even worse. You simply do not know. And what is even worse is that you do not have any realistic chance to find it out until it is too late and there is no way out. I would like to believe that all software vendors are honest and that all glossy brochures tell the truth. But I simply know that this is not the case...

Thirdly, you may be tempted to follow the "independent" product reviews. But there is a danger in getting advice from someone who benefits from cooperation with the software vendors. I cannot speak about the whole industry as I'm obviously not omniscient. But at least some major analysts seem to use evaluation methodologies that are not entirely transparent. And there might be a lot of motivations at play. Perhaps the only way to be sure that the results are sound is to review the methodology. But there is a problem. The analysts are usually not publishing details about the methodologies. Therefore what is the real value of the reports that the analysts distribute? How reliable are they?

This is not really about whether product X is better than product Y. I believe that this is an inherent limitation of the closed-source software industry. The risk of choosing an inadequate product is just too high, as the customers are not allowed to access the data that are essential to make a good decision. I believe in this: the vendor that has a good product does not need to hide anything from the customers. So there is no problem for such a vendor to go open source. If the vendor does not go open source then it is possible (maybe even likely) that there is something he needs to hide from the customers. I recommend avoiding such vendors.

It will be the binaries built from the source code that will actually run in your environment. Not the analyst charts, not the pitch of the salesmen, not even the glossy brochures. The source code is the only thing that really matters. The only thing that is certain to tell the truth. If you cannot see the source code then run away. You will probably save a huge amount of money.

(Reposted from https://www.evolveum.com/comparing-disasters/)

Vittorio Bertocci - Microsoft: Identity Libraries: Status as of 03/23/2015 [Technorati links]

March 24, 2015 06:18 AM


Time for another update to the libraries megadiagram! If you are nostalgic, you can find the old one here.

So, what’s new? Well, we added an entirely new target platform, .NET Core and the associated ASP.NET vNext – which resulted in a new drop of ADAL .NET 3.x and new OpenID Connect/OAuth2 middlewares (see the end of this guest post I wrote on the web developer tools team blog).

Fuuuun :)

Gerry Beuchelt - MITRE: CI and CND – Revisited [Technorati links]

March 24, 2015 12:54 AM
About this time last year I discussed my thoughts on Counterintelligence (CI) and Computer Network Defense (CND). My basic proposition then was that CND is materially identical (or – more precisely – a monomorphism) to a restriction of CI to Cyber activities. I think that I was way too hesitant in making this claim. After… Continue Reading →
March 23, 2015

Bill Nelson - Easy Identity: Hacking OpenAM – An Open Response to Radovan Semancik [Technorati links]

March 23, 2015 06:06 PM


I have been working with Sun, Oracle and ForgeRock products for some time now and am always looking for new and interesting topics that pertain to these and other open source identity products.  When Google alerted me to the following blog posting, I just couldn’t resist:

Hacking OpenAM, Level: Nightmare

Radovan Semancik | February 25, 2015

There were two things in the alert that caught my attention.  The first was the title and the obvious implications that it contained, and the second was the author of the blog and the fact that he’s associated with Evolveum, a ForgeRock OpenIDM competitor.

The identity community is relatively small and I have read many of Radovan’s postings in the past.  We share a few of the same mailing lists and I have seen his questions/comments come up in those forums from time to time.  I have never met Radovan in person, but I believe we are probably more alike than different.  We share a common lineage; both being successful Sun identity integrators.  We both agree that open source identity is preferable to closed source solutions.  And it seems that we both share many of the same concerns over Internet privacy.  So when I saw this posting, I had to find out what Radovan had discovered that I must have missed over the past 15 years in working with these products.  After reading his blog posting, however, I do not share his same concerns nor do I come to the same conclusions. In addition, there are several inaccuracies in the blog that could easily be misinterpreted and are being used to spread fear, uncertainty, and doubt around OpenAM.

What follows are my responses to each of Radovan’s concerns regarding OpenAM. These are based on my experiences of working with the product for over 15 years and as Radovan aptly said, “your mileage may vary.”

In the blog Radovan comments “OpenAM is formally Java 6. Which is a problem in itself. Java 6 does not have any public updates for almost two years.”

ForgeRock is not stuck with Java 6.  In fact, OpenAM 12 supports Java 7 and Java 8.  I have personally worked for governmental agencies that simply cannot upgrade their Java version for one reason or another.  ForgeRock must make their products both forward looking as well as backward compatible in order to support their vast customer base.

In the blog Radovan comments “OpenAM also does not have any documents describing the system architecture from a developers point of view.”


I agree with Radovan that early versions of the documentation were limited.  As with any startup, documentation is one of the things that suffers during the initial phases, but over the past couple of years, this has flipped.  Due to the efforts of the ForgeRock documentation team I now find most of my questions answered in the ForgeRock documentation.  In addition, ForgeRock is a commercial open source company, so they do not make all high value documents publicly available.  This is part of the ForgeRock value proposition for subscription customers.

In the blog Radovan comments “OpenAM is huge. It consists of approx. 2 million lines of source code. It is also quite complicated. There is some component structure. But it does not make much sense on the first sight.”


I believe that Radovan is confusing the open source trunk with the commercial open source product.  Simply put, ForgeRock does not include all code from the trunk in the OpenAM commercial offering.  As an example, the extensions directory, which is not part of the product, has almost 1000 Java files in it.

More importantly, you need to be careful in attempting to judge functionality, quality, and security based solely on the number of lines of code in any product.  When I worked at AT&T, I was part of a development team responsible for way more than 2M lines of code.  My personal area of responsibility was directly related to approximately 250K lines of code that I knew inside and out.  A sales rep could ask me a question regarding a particular feature or issue and I could envision the file, module, and even where in the code the question pertained (other developers can relate to this).  Oh, and this code was rock solid.

In the blog Radovan comments that the “bulk of the OpenAM code is still efficiently Java 1.4 or even older.”


Is this really a concern?  During the initial stages of my career as a software developer, my mentor beat into my head the following mantra:

If it ain’t broke, don’t fix it!

I didn’t always agree with my mentor, but I was reminded of this lesson each time I introduced bugs into code that I was simply trying to make better.  Almost 25 years later this motto has stuck with me but over time I have modified it to be:

If it ain’t broke, don’t fix it, unless there is a damn good reason to do so!

It has been my experience that ForgeRock follows a mantra similar to my modified version.  When they decide to refactor the code, they do so based on customer or market demand, not just because there are newer ways to do it.  If the old way works, performance is not limited, and security is not endangered, then why change it?  Based on my experience with closed-source vendors, this is exactly what they do; their source code, however, is hidden so you don’t know how old it really is.

A final thought on refactoring.  ForgeRock has refactored the Entitlements Engine and the Secure Token Service (both pretty mammoth projects) all while fixing bugs, responding to RFEs, and implementing new market-driven features such as:

In my opinion, ForgeRock product development is focused on the right areas.

In the blog Radovan comments “OpenAM is in fact (at least) two somehow separate products. There is “AM” part and “FM” part.”


From what I understand, ForgeRock intentionally keeps the federation code independent. This was done so that administrators could easily create and export a “Fedlet”, which is essentially a small web application that provides a customer with the code they need to implement SAML in a non-SAML application.  In short, keeping it separate allows the code to be shared with the OpenAM core services while providing session-independent federation capability.  Keeping federation independent has also made it possible to leverage the functionality in other products such as OpenIG.

In the blog Radovan comments “OpenAM debugging is a pain. It is almost uncontrollable, it floods log files with useless data and the little pieces of useful information are lost in it.“


There are several places that you can look in order to debug OpenAM issues and where you look depends mostly on how you have implemented the product.

I will agree with Radovan’s comments that this can be intimidating at first, but as with most enterprise products, knowing where to look and how to interpret the results is as much of an art as it is a science.  For someone new to OpenAM, debugging can be complex.  For skilled OpenAM customers, integrators, and ForgeRock staff, the debug logs yield a goldmine of valuable information that often assists in the rapid diagnosis of a problem.

Note:  Debugging the source code is the realm of experienced developers and ForgeRock does not expect their customers to diagnose product issues.

For those who stick strictly to the open source version, the learning curve can be steep and they have to rely on the open source community for answers (but hey, what do you want for free).  ForgeRock customers, however, will most likely have taken some training on the product to know where to look and what to look for.  In the event that they need to work with ForgeRock’s 24×7 global support desk, then they will most likely be asked to capture these files (as well as configuration information) in order to submit a ticket to ForgeRock.

In the blog Radovan comments that the “OpenAM is still using obsolete technologies such as JAX-RPC. JAX-RPC is a really bad API.” He then goes on to recommend Apache CXF and states “it takes only a handful of lines of code to do. But not in OpenAM.”

Ironically, ForgeRock began migrating away from JAX-RPC towards REST-based web services as early as version 11.0.  Now with OpenAM 12, ForgeRock has a modern (fully documented) REST STS along with a WS-TRUST Apache CXF based implementation (exactly what Radovan recommends).

ForgeRock’s commitment to REST is so strong, in fact, that they have invested heavily in the ForgeRock Common REST (CREST) Framework and API – which is used across all of their products.  They are the only vendor that I am aware of that provides REST interfaces across all products in their IAM stack.  This doesn’t mean, however, that ForgeRock can simply eliminate JAX-RPC functionality from the product.  They must continue to support JAX-RPC to maintain backwards compatibility for existing customers that are utilizing this functionality.

In the blog Radovan comments “OpenAM originated between 1998 and 2002. And the better part of the code is stuck in that time as well.”


In general, Radovan focuses on very specific things he does not like in OpenAM, but ignores all the innovations and enhancements that have been implemented since Sun Microsystems.  As mentioned earlier, ForgeRock has continuously refactored, rewritten, and added several major new features to OpenAM.

In the blog Radovan comments “ForgeRock also has a mandatory code review process for every code modification. I have experienced that process first-hand when we were cooperating on OpenICF. This process heavily impacts efficiency and that was one of the reasons why we have separated from OpenICF project.”

I understand how in today’s Agile focused world there is the tendency to shy away from old school concepts such as design reviews and code reviews.  I understand the concerns about how they “take forever” and “cost a lot of money”, but consider the actual cost of a bug getting out the door and into a customer’s environment.  The cost is borne by both the vendor and the customer, but it is ultimately the vendor who incurs a loss of trust, reputation, and, eventually, customers.  Call me old school, but I will opt for code reviews every time – especially when my customer’s security is on the line.

Note:  there is an interesting debate on the effectiveness of code reviews on Slashdot.

Conclusion

So, while I respect Radovan’s opinions, I don’t share them and apparently neither do many of the rather large companies and DOD entities that have implemented OpenAM in their own environments.  The DOD is pretty extensive when it comes to product reviews and I have worked with several Fortune 500 companies that have had their hands all up in the code – and still choose to use it.  I have worked with companies that elect to have a minimal IAM implementation team (and rely on ForgeRock for total support) to those that have a team of developers building in and around their IAM solution.  I have seen some pretty impressive integrations between OpenAM log files, debug files, and the actual source code using tools such as Splunk.  And while you don’t need to go to the extent that I have seen some companies go in getting into the code, knowing that you could if you wanted to is a nice thing to have in your back pocket.  That is the benefit of open source code and one of the benefits of working with ForgeRock in general.

I can remember working on an implementation for one rather large IAM vendor where we spent more than three months waiting for a patch.  Every status meeting with the customer became more and more uncomfortable as we waited for the vendor to respond.  With ForgeRock software, I have the opportunity to look into the code and put in my own temporary patch if necessary.  I can even submit the patch to ForgeRock and if they agree with the change (once it has gone through the code review), my patch can then be shared with others and become supported by ForgeRock.

It is the best of both worlds, it is commercial open source!


Katasoft: How to Manage API Authentication Lifecycle on Mobile Devices [Technorati links]

March 23, 2015 03:00 PM

If you didn’t catch it, in the last article I explained how to build and deploy a real mobile app that uses OAuth2 authentication for your private API service.

In this article, I’m going to cover a tightly related topic: how to properly manage your OAuth2 API token lifecycle.

Because things like token expiration and revocation are so paramount to API security, I figured they deserved their own discussion here.

Token Expiration

One of the most common questions we get here at Stormpath, when talking about token authentication for mobile devices, is about token expiration.

Developers typically ask us this:

“This OAuth2 stuff with JSON Web Tokens sounds good, but how long should I allow my access tokens to exist before expiring them? I don’t want to force my users to re-authenticate every hour. That would suck.”

This is an excellent question. The answer is a bit tricky though. Here are some general rules:

If you’re dealing with any form of sensitive data (money, banking data, etc.), don’t bother storing access tokens on the mobile device at all. When you authenticate the user and get an access token, just keep it in memory. When a user closes your app, your memory will be cleaned up, and the token will be gone. This will force users to log into your app every time they open it, but that’s a good thing.

For extra security, make sure your tokens themselves expire after a short period of time (eg: 1 hour) — this way, even if an attacker somehow compromises your access token, it’ll still expire fairly quickly.

If you’re building an app that holds sensitive data that’s not related to money, you’re probably fine forcing tokens to expire somewhere around the range of every month. For instance, if I was building a mobile app that allowed users to take fitness progress photos of themselves to review at a later time, I’d use a 1 month setting.

The above setting is a good idea as it doesn’t annoy users by requiring them to re-input their credentials every time they open the app, but also doesn’t expose them to unnecessary risk. In the worst case scenario above, if a user’s access token is compromised, an attacker might be able to view this person’s progress photos for up to one month.

If you’re building a massive consumer application, like a game or social application, you should probably use a much more liberal expiration time: anywhere from 6 months to 1 year.

For these sorts of applications, there is very little risk storing an access token for a long period of time, as the service contains only low-value content that can’t really hurt a user much if leaked. If a token is compromised, it’s not the end of the world.

This strategy also has the benefit of not annoying users by prompting them to re-authenticate very frequently. For many mass consumer applications, signing in is considered a big pain, so you don’t want to do anything to break down your user experience.

Token Revocation

Let’s now talk about token revocation. What do you do if an access token is compromised?

Firstly, let’s discuss the odds of this happening. In general: they are very low. Using the recommended data stores for Android and iOS will greatly reduce the risk of your tokens being compromised, as the operating system provides a lot of built-in protections for storing sensitive data like access tokens.

But, let’s assume for this exercise that a user using your mobile app lost their phone, a savvy hacker grabbed it, broke through the OS-level protections, and was able to extract your API service’s access token.

What do you do?

This is where token revocation comes into play.

It is, in general, a good idea to support token revocation for your API service. What this means is that you should have a way to strategically invalidate tokens after issuing them.

NOTE: Many API services do not support token revocation, and as such, simply rely on token expiration times to handle abuse issues.

Supporting token revocation means you’ll have to go through an extra few steps when building this stuff out:

  1. You’ll need to store all access tokens (JWTs) that you generate for clients in a database. This way, you can see what tokens you’ve previously assigned, and which ones are valid.

  2. You’ll need to write an API endpoint which accepts an access token (or user credentials) and removes either the specific access token or all access tokens from a user’s account.

For those of you wondering how this works, the official OAuth2 Revocation Spec actually talks about it in very simple terms. The gist of it is that you write an endpoint like /revoke that accepts POST requests, with the token or credentials in the body of the request.
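
As a rough sketch of that shape, here is what a minimal /revoke endpoint could look like in a small Flask application; the issued_tokens store and its layout are assumptions made purely for illustration:

from flask import Flask, request

app = Flask(__name__)

# Stand-in for a persistent store of issued access tokens, keyed by token string.
issued_tokens = {}  # token -> {"user_id": ..., "revoked": False}

@app.route("/revoke", methods=["POST"])
def revoke():
    token = request.form.get("token")        # token arrives in the POST body
    record = issued_tokens.get(token)
    if record is not None:
        record["revoked"] = True             # API calls using this token now fail
    # Respond 200 even if the token was unknown, so callers cannot use this
    # endpoint to probe which tokens exist.
    return "", 200

Responding 200 for unknown tokens mirrors the revocation spec's guidance, and keeps the endpoint from leaking information about which tokens are live.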

The idea is basically this though: once you know that a given access token or user account has been compromised, you’ll issue the appropriate revocation API request to your private API service. You’ll either revoke the one compromised access token, or all of the access tokens associated with that user’s account.

Make sense? Great!

Simpler Solutions

If you’re planning on writing your own API service like the ones discussed in this article, you’ll want to write as little of the actual security code as possible. Actual implementation details can be quite a bit more complex, depending on your framework and programming language.

Stormpath is an API service that stores your user accounts securely, manages API keys, handles OAuth2 flows, and also provides tons of convenience methods / functions for working with user data, doing social login, and a variety of other things.

If you have any questions (this stuff can be confusing), feel free to email us directly — we really don’t mind answering questions! You can, of course, also just leave a comment below. Either way works!

-Randall

Julian Bond: California is in the grip of a record drought tied to climate change. This water crisis holds the potential... [Technorati links]

March 23, 2015 02:01 PM
I wonder if anyone appreciates how serious, how close and how inevitable this is. Are the answers really: "extremely", "24 months" and "totally"?

Bill Smith originally shared this post:
California is in the grip of a record drought tied to climate change. This water crisis holds the potential to collapse California’s economy if the state truly runs out of water. What an irony that the state most focused on global warming may be its first victim.
California anchors U.S. economy. It has the seventh largest economy in the world, approximately twice the size of Texas. California’s economy is so large and impacts so many other businesses that its potential collapse due to a water crisis will impact the pocketbooks of most Americans.

#California #Drought




 Climate Change Puts California Economy at Risk of Collapse »
California faces one more year of water supply -- a water crisis that holds the potential to collapse the state’s economy. What an irony that the state most focused on global warming may be its first catastrophic economic collapse victim.

[from: Google+ Posts]

Katasoft: The Ultimate Guide to Mobile API Security [Technorati links]

March 23, 2015 02:00 PM

Mobile API consumption is a topic that comes up frequently on both Stack Overflow and the Stormpath support channel. It’s a problem that has already been solved, but requires a lot of prerequisite knowledge and sufficient understanding in order to implement properly.

This post will walk you through everything you need to know to properly secure a REST API for consumption on mobile devices, whether you’re building a mobile app that needs to access a REST API, or writing a REST API and planning to have developers write mobile apps that work with your API service.

My goal is to not only explain how to properly secure your REST API for mobile developers, but to also explain how the entire exchange of credentials works from start to finish, how to recover from security breaches, and much more.

The Problem with Mobile API Security

Before we dive into how to properly secure your REST API for mobile developers — let’s first discuss what makes mobile authentication different from traditional API authentication in the first place!

The most basic form of API authentication is typically known as HTTP Basic Authentication.

The way it works is pretty simple for both the people writing API services and the developers that consume them: the API service issues each developer an API key (an ID and secret), and the developer sends that key along with every request, typically in an HTTP Authorization header.

HTTP Basic Authentication is great because it’s simple. A developer can request an API key, and easily authenticate to the API service using this key.

What makes HTTP Basic Authentication a bad option for mobile apps is that you need to actually store the API key securely in order for things to work. In addition to this, HTTP Basic Authentication requires that your raw API keys be sent over the wire for every request, thereby increasing the chance of exploitation in the long run (the less you use your credentials, the better).
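
To see why that's risky, it helps to remember how little is protecting the key on the wire: the Basic scheme just base64-encodes it, it doesn't encrypt it. A tiny sketch with made-up credentials:

import base64

api_key_id = "my-key-id"            # made-up credentials
api_key_secret = "my-key-secret"

raw = (api_key_id + ":" + api_key_secret).encode("ascii")
auth_header = "Basic " + base64.b64encode(raw).decode("ascii")
# Every single request then carries:
#   Authorization: Basic bXkta2V5LWlkOm15LWtleS1zZWNyZXQ=
print(auth_header)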

In most cases, this is impractical as there’s no way to safely embed your API keys into a mobile app that is distributed to many users.

For instance, if you build a mobile app with your API keys embedded inside of it, a savvy user could reverse engineer your app, exposing this API key, and abusing your service.

This is why HTTP Basic Authentication is not optimal in untrusted environments, like web browsers and mobile applications.

NOTE: Like all authentication protocols, HTTP Basic Authentication must be used over SSL at all times.

Which brings us to our next section…

Introducing OAuth2 for Mobile API Security

You’ve probably heard of OAuth before, and the debate about what it is and is not good for. Let’s be clear: OAuth2 is an excellent protocol for securing API services from untrusted devices, and it provides a nice way to authenticate mobile users via what is called token authentication.

Here’s how OAuth2 token authentication works from a user perspective (OAuth2 calls this the password grant flow):

  1. A user opens up your mobile app and is prompted for their username or email and password.
  2. You send a POST request from your mobile app to your API service with the user’s username or email and password data included (OVER SSL!).
  3. You validate the user credentials, and create an access token for the user that expires after a certain amount of time.
  4. You store this access token on the mobile device, treating it like an API key which lets you access your API service.
  5. Once the access token expires and no longer works, you re-prompt the user for their username or email and password.

What makes OAuth2 great for securing APIs is that it doesn’t require you to store API keys in an unsafe environment. Instead, it will generate access tokens that can be stored in an untrusted environment temporarily.

This is great because even if an attacker somehow manages to get a hold of your temporary access token, it will expire! This reduces damage potential (we’ll cover this in more depth in our next article).

Now, when your API service generates an OAuth2 access token that your mobile app needs, of course you’ll need to store this in your mobile app somewhere.

BUT WHERE?!

Well, there are different places this token should be stored depending on what platform you’re developing against. If you’re writing an Android app, for instance, you’ll want to store all access tokens in SharedPreferences (here’s the API docs you need to make it work). If you’re an iOS developer, you will want to store your access tokens in the Keychain.

If you still have questions, the following two StackOverflow posts will be very useful — they explain not only how you should store access tokens a specific way, but why as well:

It’s all starting to come together now, right? Great!

You should now have a high level of understanding in regards to how OAuth2 can help you, why you should use it, and roughly how it works.

Which brings us to the next section…

Access Tokens

Let’s talk about access tokens for a little bit. What the heck are they, anyway? Are they randomly generated numbers? Are they uuids? Are they something else? AND WHY?!

Great questions!

Here’s the short answer: an access token can technically be anything you want:

As long as you can:

You’re golden!

BUT… With that said, there are some conventions you’ll probably want to follow.

Instead of handling all this stuff yourself, you can create an access token that’s a JWT (JSON Web Token). It’s a relatively new specification that allows you to generate access tokens that:

JWTs also look like a randomly generated string: so you can always store them as strings when using them. This makes them really convenient to use in place of a traditional access token as they’re basically the same thing, except with way more benefits.

JWTs are almost always cryptographically signed. The way they work is like so:

Now, from the mobile client, you can view whatever is stored in the JWT. So if I have a JWT, I can easily check to see what JSON data is inside it. Usually it’ll be something like:

{
  "user_id": "e3457285-b604-4990-b902-960bcadb0693",
  "scope": "can-read can-write"
}

Now, this is a 100% fictional example, of course, but you get the idea: if I have a copy of this JWT token, I can see the JSON data above, yey!

But I can also verify that it is still valid, because the JWT spec supports expiring tokens automatically. So when you’re using your JWT library in whatever language you’re writing, you’ll be able to verify that the JWT you have is valid and hasn’t yet expired (cool).

This means that if you use a JWT to access an API service, you’ll be able to tell whether or not your API call will work by simply validating the JWT! No API call required!
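
Here's a small sketch of that client-side check using PyJWT (an assumed library choice; the post doesn't prescribe one). The client only reads the payload and compares the exp claim to the clock; full signature verification happens on the API server, which holds the signing secret:

import time
import jwt  # PyJWT

def token_looks_usable(token):
    try:
        # Decode without verifying the signature, just to read the claims.
        claims = jwt.decode(token, options={"verify_signature": False})
    except jwt.InvalidTokenError:
        return False                      # not even a parseable JWT
    # If it has already expired, skip the API call entirely.
    return claims.get("exp", 0) > time.time()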

Now, once you’ve got a valid JWT, you can also do cool stuff with it on the server-side.

Let’s say you’ve given out a JWT to a mobile app that contains the following data:

{
  "user_id": "e3457285-b604-4990-b902-960bcadb0693",
  "scope": "can-read can-write"
}

But let’s say some malicious program on the mobile app is able to modify your JWT so that it says:

{
  "user_id": "e3457285-b604-4990-b902-960bcadb0693",
  "scope": "can-read can-write can-delete"
}

See how I added in the can-delete permission there? What will happen if this modified token is sent to our API server? Will it work? Will our server accept this modified JWT?

NOPE!!

When your API service receives this JWT and validates it, it’ll do a few things: it will check the token’s signature against the secret key held on the server (which immediately catches the tampering, since the modified payload no longer matches the signature), check that the token hasn’t expired, and reject the request if either check fails.

This is nice functionality, as it makes handling verification / expiration / security a lot simpler.

The only thing you need to keep in mind when working with JWTs is this: you should only store stuff you don’t mind exposing publicly.

As long as you follow the rule above, you really can’t go wrong with using JWTs.

The two pieces of information you’ll typically store inside of a JWT are the user’s ID and the user’s permissions (the scope in the examples above).

So, that just about sums up JWTs. Hopefully you now know why you should be using them as your OAuth access tokens — they provide cryptographically signed, tamper-evident payloads, built-in expiration, and a convenient place to carry user data without an extra database lookup.

Now, moving on — let’s talk about how this all works together…

How it All Works

In this section we’re going to get into the nitty gritty and cover the entire flow from start to finish, with all the low-level technical details you need to build a secure API service that can be securely consumed from a mobile device.

Ready? Let’s do this.

First off, here’s how things will look when we’re done. You’ll notice each step in the image has a little number next to it. That’s because I’m going to explain each step in detail below.

OAuth2 Flow

So, take a look at that image above, and then follow along.

1. User Opens App

The user opens the app! Next!

2. App Asks for Credentials

Since we’re going to be using the OAuth2 password grant type scheme to authenticate users against our API service, your app needs to ask the user for their username or email and password.

Almost all mobile apps ask for this nowadays, so users are used to typing their information in.

3. User Enters their Credentials

Next, the user enters their credentials into your app. Bam. Done. Next!

4. App Sends POST Requests to API Service

This is where the initial OAuth2 flow begins. What you’ll be doing is essentially making a simple HTTP POST request from your mobile app to your API service.

Here’s a command line POST request example using cURL:

$ curl --form 'grant_type=password&username=USERNAMEOREMAIL&password=PASSWORD' https://api.example.com/v1/oauth

What we’re doing here is POST’ing the username or email and password to our API service using the OAuth2 password grant type (there are several grant types, but this is the one we’ll be talking about here as it’s the only relevant one when discussing building your own mobile-accessible API).

NOTE: See how we’re sending the body of our POST request as form content? That is, application/x-www-form-urlencoded? This is what the OAuth2 spec wants =)

5. API Server Authenticates the User

What happens next is that your API service retrieves the incoming username or email and password data and validates the user’s credentials.

This step is very platform specific, but typically works like so:

  1. You retrieve the user account from your database by username or email.
  2. You compare the password hash from your database to the password received from the incoming API request. NOTE: Hopefully you store your passwords with bcrypt!
  3. If the credentials are valid (the user exists, and the password matches), then you can move on to the next step; a minimal sketch of this check follows this list. If not, you’ll return an error response to the app, letting it know that the credentials are invalid.
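
A minimal sketch of that credential check, assuming bcrypt (as the note above recommends) and an in-memory stand-in for your user database:

import bcrypt

# Stand-in for your user database: login -> user record with a bcrypt hash.
USERS = {
    "jane@example.com": {
        "id": "e3457285",
        "password_hash": bcrypt.hashpw(b"s3cret", bcrypt.gensalt()),
    },
}

def authenticate(username_or_email, password):
    user = USERS.get(username_or_email)
    if user is None:
        return None                       # unknown account -> error response
    # Compare the submitted password against the stored bcrypt hash.
    if bcrypt.checkpw(password.encode("utf-8"), user["password_hash"]):
        return user
    return None                           # wrong password -> error response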

6. API Server Generates a JWT that the App Stores

Now that you’ve authenticated the app’s OAuth2 request, you need to generate an access token for the app. To do this, you’ll use a JWT library to generate a useful access token, then return it to the app.

Here’s how you’ll do it:

  1. Using whatever JWT library is available for your language, you’ll create a JWT that includes JSON data which holds the user ID (from your database, typically), all user permissions (if you have any), and any other data you need the app to immediately access.

  2. Once you’ve generated a JWT, you’ll return a JSON response to the app that looks something like this:

     {
       "access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY",
       "token_type": "bearer",
       "expires_in": 3600
     }
    

    As you can see above, our JSON response contains 3 fields. The first field, access_token, is the actual OAuth2 access token that the mobile app will be using from this point forward in order to make authenticated API requests.

    The second field, token_type, simply tells the mobile app what type of access token we’re providing — in this case, we’re providing an OAuth2 Bearer token. I’ll talk about this more later on.

    Lastly, the third field provided is the expires_in field. This is basically the number of seconds for which the supplied access token is valid.

    In the example above, what we’re saying is that we’re giving this mobile app an access token which can be used to access our private API for up to 1 hour — no more. After 1 hour (3600 seconds) this access token will expire, and any future API calls we make using that access token will fail.

  3. On the mobile app side of things, you’ll retrieve this JSON response, parse out the access token that was provided by the API server, and then store it locally in a secure location. On Android, this means SharedPreferences, on iOS, this means Keychain.

Now that you’ve got an access token securely stored on the mobile device, you can use it for making all subsequent API requests to your API server.
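
Looking back at steps 1 and 2 above, here's a minimal sketch of how the server side might mint that token and assemble the JSON response, using PyJWT as an assumed library; the secret and claim names are illustrative:

import time
import jwt  # PyJWT

JWT_SECRET = "server-side-signing-secret"   # made up; keep the real one private
TOKEN_LIFETIME = 3600                       # seconds, i.e. one hour

def build_token_response(user):
    now = int(time.time())
    access_token = jwt.encode(
        {
            "sub": user["id"],                         # user ID from your database
            "scope": " ".join(user.get("permissions", [])),
            "iat": now,                                # issued-at
            "exp": now + TOKEN_LIFETIME,               # hard expiry, one hour out
        },
        JWT_SECRET,
        algorithm="HS256",
    )
    return {
        "access_token": access_token,
        "token_type": "bearer",
        "expires_in": TOKEN_LIFETIME,
    }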

Not bad, right?

7. App Makes Authenticated Requests to API Server

All that’s left to do now is to make secure API requests from your mobile app to your API service. The way you do this is simple.

In the last step, your mobile app was given an OAuth2 access token, which it then stored locally on the device.

In order to successfully make API requests using this token, you’ll need to create an HTTP Authorization header that uses this token to identify your user.

To do this, what you’ll do is insert your access token along with the word Bearer into the HTTP Authorization header. Here’s how this might look using cURL:

$ curl -H "Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY" https://api.example.com/v1/test

In the end, your Authorization header will look like this: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJEUExSSTVUTEVNMjFTQzNER0xHUjBJOFpYIiwiaXNzIjoiaHR0cHM6Ly9hcGkuc3Rvcm1wYXRoLmNvbS92MS9hcHBsaWNhdGlvbnMvNWpvQVVKdFZONHNkT3dUVVJEc0VDNSIsImlhdCI6MTQwNjY1OTkxMCwiZXhwIjoxNDA2NjYzNTEwLCJzY29wZSI6IiJ9.ypDMDMMCRCtDhWPMMc9l_Q-O-rj5LATalHYa3droYkY.

When your API service receives the HTTP request, here is what it will do (a minimal sketch follows this list):

  1. Inspect the HTTP Authorization header value, and see that it starts with the word Bearer.

  2. Next, it’ll grab the following string value, referring to this as the access token.

  3. It’ll then validate this access token (JWT) using a JWT library. This step ensures the token is valid, untampered with, and not yet expired.

  4. It’ll then retrieve the user’s ID and permissions out of the token (permissions are optional, of course).

  5. It’ll then retrieve the user account from the user database.

  6. Lastly, it will ensure that the user is authorized to perform the requested action. After this is done, the API server will simply process the API request and return the result normally.
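
Here is a minimal sketch of steps 1 through 4 as Express middleware, again assuming the jsonwebtoken package and the same illustrative JWT_SECRET; loading the user record and checking permissions (steps 5 and 6) would happen in the handlers that run after it:

var jwt = require('jsonwebtoken');

// Illustrative Express middleware for validating "Authorization: Bearer <token>".
function bearerAuth(req, res, next) {
  var header = req.headers.authorization || '';
  var parts = header.split(' ');

  // Steps 1 and 2: the header must look like "Bearer <access token>".
  if (parts.length !== 2 || parts[0] !== 'Bearer') {
    return res.status(401).json({ error: 'invalid_request' });
  }

  try {
    // Step 3: verify the signature and expiration of the JWT.
    var claims = jwt.verify(parts[1], process.env.JWT_SECRET);

    // Step 4: expose the user ID and permissions to later handlers.
    req.userId = claims.sub;
    req.permissions = claims.permissions || [];
    return next();
  } catch (err) {
    return res.status(401).json({ error: 'invalid_token' });
  }
}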

Notice anything familiar about this flow? You should! It’s almost exactly the way HTTP Basic Authentication works, with one main difference in execution: the HTTP Authorization header is slightly different (Bearer vs Basic).

This is the end of the “How it All Works” section. In the next article, we’ll talk about all the other things you need to know about managing API authentication on mobile devices.

Simpler Solutions

As this is a high level article meant to illustrate how to properly write an API service that can be consumed from mobile devices, I’m not going to get into language specific implementation details here — however, I do want to cover something I consider to be very important.

If you’re planning on writing your own API service like the ones discussed in this article, you’ll want to write as little of the actual security code as possible. While I’ve done my best to summarize exactly what needs to be done in each step in the process, actual implementation details can be quite a bit more complex.

It’s usually a good idea to find a popular OAuth2 library for your favorite programming language or framework, and use that to help offload some of the burden of writing this sort of thing yourself.

Lastly, if you really want to simplify things, you might want to sign up for our service: Stormpath. Stormpath is an API service that stores your user accounts securely, manages API keys, handles OAuth2 flows, and also provides tons of convenience methods / functions for working with user data, doing social login, and a variety of other things.

Stormpath is also totally, 100% free to use. You can start using it RIGHT NOW in your applications, and BAM, things will just work. We only charge you for real projects — feel free to deploy as many side projects as you’d like on our platform for no cost =)

Hopefully this article has helped you figure out the best way to handle API authentication for your mobile devices. If you have any questions (this stuff can be confusing), feel free to email us directly!

-Randall

Nat SakimuraPresident Obama taps David Recordon as the White House's "Director of Information Technology (?)" [Technorati links]

March 23, 2015 12:53 PM

Photo by Brian Solis (2009), CC-BY. He presumably looks a bit older now.

This is slightly old news by now. It caused quite a stir among my circle on Friday morning Japan time, but I had no time on Friday, and then I fell ill and have been in bed until just now...

This is the Yahoo! Tech news [1] that David has been tapped as the White House's "director of information technology". I wonder whether 「情報技術長官」 is the right way to render that in Japanese... According to Wikipedia's list of US government terminology [2], "Director" is apparently rendered as 「長官」... (If anyone knows this area well, please enlighten me.)

He was a driving force behind the launch of the OpenID® Foundation in the US and its first vice-chairman, and he is also the primary author of OpenID® Authentication 2.0. At the time he was working at SixApart and then Verisign Labs; he then went to Facebook, where he got partway through moving Facebook's identity over to OAuth 2.0 [3], before moving on to the Open Compute Project [4], where he also left his mark.

He looks middle-aged in photos, but he is still in his twenties. He is fairly fond of Japan; when he visits, we go out for meals together. His standard outfit is a T-shirt, shorts and sandals. I wonder whether he will finally wear a suit once he is in the White House (heh).

According to the White House statement [5], his role at the White House is

or so the statement says.

Incidentally, the White House is reportedly requesting $105M (roughly 12 billion yen) in next year's budget to build digital teams across 25 agencies, so the poaching from Silicon Valley is likely to continue.

Most of them probably take a pay cut, but their salaries jump when they later return to the private sector. So a stint in government becomes part of building a career. In David's case, the Obama administration has two years left, so he presumably plans to work in government for that period and then return to the private sector.

In Japan, even if people were motivated to join the government, returning to the private sector afterwards looks quite difficult... And it is unclear whether you could really wield any power within the system. A tricky problem.

 

[1] Alyssa Bereznak, “Exclusive: Facebook Engineering Director Is Headed to the White House”, (2015-03-19), Yahoo! Tech,  https://www.yahoo.com/tech/exclusive-facebook-engineering-director-is-headed-114060505054.html

[2] Wikipedia 米国政府用語一覧 http://ja.wikipedia.org/wiki/%E7%B1%B3%E5%9B%BD%E6%94%BF%E5%BA%9C%E7%94%A8%E8%AA%9E%E4%B8%80%E8%A6%A7

[3] Unfortunately it stalled there, which is a shame... As a result, FB is still on something like OAuth 2.0 draft 10...

[4] Roughly speaking, a project to open up and spread Google- and Facebook-style server designs. http://www.opencompute.org/

[5] Anita Breckenridge, “President Obama Names David Recordon as Director of White House Information Technology”, The Whitehouse Blog, (2015-03-19),  https://www.whitehouse.gov/blog/2015/03/19/president-obama-names-david-recordon-director-white-house-information-technology

[6] Mariella Moon, “White House names top Facebook engineer as first director of IT”, Engadget, http://www.engadget.com/2015/03/20/white-house-recordon-facebook-director-it/

Nat SakimuraStrongly agreeing with Professor Natsui's commentary on Japan's personal data protection legislation [Technorati links]

March 23, 2015 11:50 AM

Professor Natsui's blog [1] carries a commentary on Japan's personal data protection legislation, in the form of a review of the so-called "privacy freak book" [2]. I agree with it emphatically.

The professor writes:

The common-sense point that "the Personal Information Protection Act is an administrative norm" needs to be publicized far more thoroughly.

Lawyers aside, I suspect this is not understood by the general public at all. That is why we devoted so much of the recent OpenID BizDay [3] to exactly this point.

In many cases you reach a more appropriate conclusion by going back to basics and treating the question as one of interpreting tort law.
(...)
The most useful reference there is, after all, Prosser's taxonomy of privacy torts; such classical theoretical constructions actually turn out to be more useful.

A nice side effect is that this approach also fits the technical side well: such a taxonomy can be translated directly into "Objectives" and "Threats", which makes "Controls" much easier to design.

However, applying the Personal Information Protection Act does not lead to direct redress for the individual whose personal data has been mishandled. It is simply not that kind of law. My personal view is that it has so many defects that it should be completely rewritten.

This may be something many people have felt but have not said out loud. (I myself have been saying "why not try rewriting it from a clean slate?" in various places recently.)

What Japanese legal scholars really ought to do, I think, is to exhaust every possible effort to see whether the problem can be handled through the interpretation and application of the tort provisions of Japan's Civil Code (Article 723 in particular).

I think this is exactly right as well. When you talk about this in the US, that is where the discussion starts. You also hear fairly often that effective privacy protection may actually work better in the US than in the EU. Unlike the US, however, Japan does not seem to have the various legal tools needed in this area, so there is also the task of putting those in place.

On the other hand, there is also the matter of responding to the WTO-loophole problem created by the EU's data protection regime, so quite apart from substantive privacy protection, a personal data protection law is certainly useful as a diplomatic instrument in trade friction. If that is the goal, the law should be steered firmly in that direction, but as it stands it is neither one thing nor the other.


[1] 夏井高人: “鈴木正朝・高木浩光・山本一郎『ニッポンの個人情報 -「個人を特定する情報が個人情報である」と信じているすべての方へ』”, サイバー法ブログ, (2015/3/23), http://cyberlaw.cocolog-nifty.com/blog/2015/03/post-bafd.html


[2] 鈴木正朝・高木浩光・山本一郎『ニッポンの個人情報 -「個人を特定する情報が個人情報である」と信じているすべての方へ』,  翔泳社 (2015/2/20)


[3] 崎村夏彦:『セミナー:企業にとっての実践的プライバシー保護~個人情報保護法は免罪符にはならない』, @_Nat Zone, (2015-03-01) http://www.sakimura.org/2015/03/2911/

 

Christopher Allen - AlacrityMini Resume Card for Conference Season [Technorati links]

March 23, 2015 06:55 AM

Between the business of the March/April conference season and leaving Blackphone, I've run out of business cards. Rather than rush to print a bunch of new ones, I've created this mini-resume for digital sharing and a two-sided Avery business card version that I am printing on my laser printer and sharing.

Not as pretty as my old Life With Alacrity cards, but effective in getting across the diversity of my professional experience and interests.

Christopher Allen Micro Resume

As someone who teaches Personal Branding in my courses at BGI@Pinchot.edu, I always find it hard to practice what I preach and ask for advice and suggestions. In this case I'm trying to tame the three-headed Cerberus of my profession: a Privacy/Crypto/Developer Community head, an Innovative Business Educator/Instructional Designer head, and a Collaborative Tools, Processes, Games and Play head. All come tied together in my body as ultimately being about collaboration, but it is hard to explain some of the correspondences.

March 22, 2015

Julian BondElectronic music, released on cassette labels, from Novosibirsk. [Technorati links]

March 22, 2015 07:24 PM
Electronic music, released on cassette labels, from Novosibirsk.

http://calvertjournal.com/articles/show/3744/siberian-electronic-music-scene-klammlang-cassettes
 Breaking the ice: the independent cassette label putting Siberian electronica on the map »
A feature about the Klammklang label and the Siberian electronic music scene

[from: Google+ Posts]
March 21, 2015

Julian BondIf Europe is getting worried about immigrants from N Africa taking the perilous journey to Italy, there's... [Technorati links]

March 21, 2015 06:30 PM
If Europe is getting worried about immigrants from N Africa taking the perilous journey to Italy, there's an obvious solution. Make the countries of the southern and eastern mediterranean part of the EU.
https://deepresource.wordpress.com/2015/03/21/europe-defends-itself/

Note: I hate contentious (and non-contentious) blogs that don't allow comments. But then I don't allow comments on my own blog because I can't be bothered to moderate them.
 Europe Defends Itself »
German NWO-magazine and US State Department mouth piece der Spiegel hates to say it, but Europe seems to be getting serious about defending itself against the waves of invaders from the South. In t...

[from: Google+ Posts]

Kantara InitiativeNon-Profits on the Loose @ RSA 2015 [Technorati links]

March 21, 2015 01:23 AM

Tuesday April 21st from 5-8pm @ the Minna Gallery
Join Kantara and partners at the 2015 “Non-Profits On the Loose.”

HOW TO GET IN:

Enter using your RSA badge or bring the invite below for exclusive access to this annual networking event. Break bread with leading movers and shakers from the Identity Management and Cyber Security industries. We look forward to seeing you there!

IDENTITY DRINK SPECIALS:

This year, try the “UMArtini” and celebrate the achievements of Kantara’s award-winning UMA (User-Managed Access), shaping identity for a connected world.

THANKS:

Kantara extends gratitude to our generous sponsors: Experian Public Sector, ForgeRock, and the IEEE-SA. Their support enables community growth for this night of heavy-hitter networking and fun.

[Image: Non-Profits On the Loose 2015 invite]

March 20, 2015

Vittorio Bertocci - MicrosoftAzure AD Token Lifetime [Technorati links]

March 20, 2015 04:10 PM

For how long are AAD-issued tokens valid? I have mentioned this in scattered posts, but this AM Danny reminded me of how frequent this Q really is – and as such, it deserves its own entry.

As of today, the rules are pretty simple:

That’s it, short and sweet.

GluuOSCON: Crypto For Kids [Technorati links]

March 20, 2015 12:02 AM


 

Description

Crypto is the ultimate secret message machine! This workshop will first introduce the history of crypto, and some of the basic mathematical underpinnings. Then through fun activities and games, kids will get some hands on experience using linux crypto tools and the python programming language.

Abstract

Without crypto, we could not have security or privacy on the Internet. You would not be able to pay for something on the Web. In fact, crypto is more important than the Web, because every Internet service that needs private communication (email, video, voice) depends on it.

But how has crypto technology changed since World War II, when teams of English and American mathematicians broke the German Enigma cipher? Where did modern crypto come from? Who invented it?

In the two hours for this class, we’ll use some new open source tools for cryptography to help kids understand what makes the technology tick. We’ll highlight two-way encryption, public-private key encryption, crypto signing, X.509 certificates, hierarchical public key infrastructures, and physical access tokens. In the course of this, we’ll also introduce some basic python coding scripts that will enable the kids to send each other secret messages.

March 19, 2015

Bill Nelson - Easy IdentityOpenDJ Access Control Explained [Technorati links]

March 19, 2015 11:04 PM

An OpenDJ implementation will contain certain data that you would like to explicitly grant or deny access to.  Personally identifiable information (PII) such as a user’s home telephone number, their address, birth date, or simply their email address might be required by certain team members or applications, but it might be a good idea to keep this type of information private from others. On the other hand, you may want their office phone number published for everyone within the company to see but limit access to this data outside of the company.

Controlling users’ access to different types of information forms the basis of access control in OpenDJ and consists of the following two stages:

  1. Authentication (establishing who you are)
  2. Authorization (determining what you are allowed to do)

Before you are allowed to perform any action within OpenDJ, it must first know who you are.  Once your identity has been established, OpenDJ can then ascertain the rights you have to perform actions either on the data contained in its database(s) or within the OpenDJ process itself.

 


Access Control = Authentication + Authorization

 

Note:  Access control is not defined in any of the LDAP RFCs so the manner in which directory servers implement access control varies from vendor to vendor.  Many directory services (including OpenDJ) follow the LDAP v3 syntax introduced by Netscape.

 

Access control is implemented with an operational attribute called aci (which stands for access control instruction).  Access control instructions can be configured globally (the entire OpenDJ instance) or added to specific directory entries.

 

1. Global ACIs:

 

Global ACIs are not associated with directory entries and therefore are not available when searching against a typical OpenDJ suffix (such as dc=example,dc=com).  Instead, Global ACIs are considered configuration objects and may be found in the configuration suffix (cn=config).  You can find the currently configured Global ACIs by opening the config.ldif file and locating the entry for the “Access Control Handler”.  Or, you can search for “cn=Access Control Handler” in the configuration suffix (cn=config) as follows:

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "cn=config" -s sub "cn=Access Control Handler" ds-cfg-global-aci

 

This returns the following results on a freshly installed (unchanged) OpenDJ server.

 

dn: cn=Access Control Handler,cn=config

ds-cfg-global-aci: (extop=”1.3.6.1.4.1.26027.1.6.1 || 1.3.6.1.4.1.26027.1.6.3 || 1.3.6.1.4.1.4203.1.11.1 || 1.3.6.1.4.1.1466.20037 || 1.3.6.1.4.1.4203.1.11.3″) (version 3.0; acl “Anonymous extended operation access”; allow(read) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (target=”ldap:///”)(targetscope=”base”)(targetattr=”objectClass||namingContexts||supportedAuthPasswordSchemes||supportedControl||supportedExtension||supportedFeatures||supportedLDAPVersion||supportedSASLMechanisms||supportedTLSCiphers||supportedTLSProtocols||vendorName||vendorVersion”)(version 3.0; acl “User-Visible Root DSE Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (target=”ldap:///cn=schema”)(targetattr=”attributeTypes||objectClasses”)(version 3.0;acl “Modify schema”; allow (write)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

ds-cfg-global-aci: (target=”ldap:///cn=schema”)(targetscope=”base”)(targetattr=”objectClass||attributeTypes||dITContentRules||dITStructureRules||ldapSyntaxes||matchingRules||matchingRuleUse||nameForms||objectClasses”)(version 3.0; acl “User-Visible Schema Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (target=”ldap:///dc=replicationchanges”)(targetattr=”*”)(version 3.0; acl “Replication backend access”; deny (all) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (targetattr!=”userPassword||authPassword||changes||changeNumber||changeType||changeTime||targetDN||newRDN||newSuperior||deleteOldRDN”)(version 3.0; acl “Anonymous read access”; allow (read,search,compare) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (targetattr=”audio||authPassword||description||displayName||givenName||homePhone||homePostalAddress||initials||jpegPhoto||labeledURI||mobile||pager||postalAddress||postalCode||preferredLanguage||telephoneNumber||userPassword”)(version 3.0; acl “Self entry modification”; allow (write) userdn=”ldap:///self”;)

ds-cfg-global-aci: (targetattr=”createTimestamp||creatorsName||modifiersName||modifyTimestamp||entryDN||entryUUID||subschemaSubentry||etag||governingStructureRule||structuralObjectClass||hasSubordinates||numSubordinates”)(version 3.0; acl “User-Visible Operational Attributes”; allow (read,search,compare) userdn=”ldap:///anyone”;)

ds-cfg-global-aci: (targetattr=”userPassword||authPassword”)(version 3.0; acl “Self entry read”; allow (read,search,compare) userdn=”ldap:///self”;)

ds-cfg-global-aci: (targetcontrol=”1.3.6.1.1.12 || 1.3.6.1.1.13.1 || 1.3.6.1.1.13.2 || 1.2.840.113556.1.4.319 || 1.2.826.0.1.3344810.2.3 || 2.16.840.1.113730.3.4.18 || 2.16.840.1.113730.3.4.9 || 1.2.840.113556.1.4.473 || 1.3.6.1.4.1.42.2.27.9.5.9″) (version 3.0; acl “Authenticated users control access”; allow(read) userdn=”ldap:///all”;)

ds-cfg-global-aci: (targetcontrol=”2.16.840.1.113730.3.4.2 || 2.16.840.1.113730.3.4.17 || 2.16.840.1.113730.3.4.19 || 1.3.6.1.4.1.4203.1.10.2 || 1.3.6.1.4.1.42.2.27.8.5.1 || 2.16.840.1.113730.3.4.16 || 1.2.840.113556.1.4.1413″) (version 3.0; acl “Anonymous control access”; allow(read) userdn=”ldap:///anyone”;)

 

2. Entry-Based ACIs:

 

Access control instructions may also be applied to any entry in the directory server.  This allows fine grained access control to be applied anywhere in the directory information tree and therefore affects the scope of the ACI.

Note:  Placement has a direct effect on the entry where the ACI is applied as well as any children of that entry.

You can obtain a list of all ACIs configured in your server (sans the Global ACIs) by performing the following search:

 

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "dc=example,dc=com" -s sub aci=* aci

 

By default, there are no ACIs configured at the entry level.  The following is an example of ACIs that might be returned if you did have ACIs configured, however.

 

dn: dc=example,dc=com

aci: (targetattr=”*”)(version 3.0;acl “Allow entry search”; allow (search,read)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

aci: (targetattr=”*”)(version 3.0;acl “Modify config entry”; allow (write)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

aci: (targetcontrol=”2.16.840.1.113730.3.4.3″)(version 3.0;acl “Allow persistent search”; allow (search, read)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

aci: (version 3.0;acl “Add config entry”; allow (add)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”);)

aci: (version 3.0;acl “Delete config entry”; allow (delete)(userdn = “ldap:///uid=openam,ou=Service Accounts,dc=example,dc=com”); )

dn: ou=Applications,dc=example,dc=com

aci: (target =”ldap:///ou=Applications,dc=example,dc=com”)(targetattr=”*”)(version 3.0;acl “Allow Application Config Access to Web UI Admin”; allow (all)(userdn = “ldap:///uid=webui,ou=Applications,dc=example,dc=com”); )

ACI Syntax:

 

The syntax for access control instructions is not specific to OpenDJ; in fact, for the most part, it shares the same syntax with the Oracle Directory Server Enterprise Edition (“ODSEE”).  This is mainly due to the common lineage with Sun Microsystems, but other directory servers do not use the same syntax, and this makes migration more difficult (even the schema in both servers contains an attribute called aci).  If you export OpenDJ directory entries to LDIF and attempt to import them into another vendor’s server, the aci statements would either be ignored or, worse, might have unpredictable results altogether.

The following syntax is used by the OpenDJ server.

 

[Figure: ACI syntax, showing the target(s), version, ACI name, permission(s), and bind rule(s)]

 

Access control instructions require three inputs: target, permission, and subject. The target specifies the entries to which the aci applies. The subject identifies the client that is performing the operation, and the permissions specify what the subject is allowed to do. You can create some very powerful access control based on these three inputs.

The syntax also includes the version of the aci syntax, version 3.0. This is the aci syntax version, not the LDAP version. Finally, the syntax allows you to enter a human readable name. This allows you to easily search for and identify access control statements in the directory server.

Note:  Refer to the OpenDJ Administration Guide for a more detailed description of each of the aci components.

The following is an example of an ACI that permits a user to write to their own password and mobile phone attributes.

 

aci: (targetattr="userPassword||mobile")(version 3.0; acl "Self entry write"; allow (write) userdn="ldap:///self";)

 

You cannot read the ACI from left to right, or even right to left; you simply have to dive right in and look for the information required to understand the intent of the ACI.  If you have been working with ACIs for some time, you probably already have your own process, but I read/interpret the preceding ACI as follows:

This ACI “allows” a user to “write” to their own (“ldap:///self”) userPassword and mobile attributes (targetattr=”userPassword||mobile”).

If you place this ACI on a particular user’s object (e.g. uid=bnelson,ou=people,dc=example,dc=com), then this ACI would only apply to this object.  If you place this ACI on a container of multiple user objects (e.g. ou=people,dc=example,dc=com), then this ACI would apply to all user objects included in this container.

 

Access Control Processing:

 

Access control instructions provide fine-grained control over what a given user or group member is authorized to do within the directory server.

When a directory-enabled client tries to perform an operation on any entry in the server, an access control list (ACL) is created for that particular entry. The ACL for any given entry consists of the ACIs defined on the entry being accessed as well as those defined on any parent entries all the way up to the root entry.

 

[Figure: directory information tree with ACIs defined at various entries]

 

The ACL is essentially the summation of all acis defined for the target(s) being accessed plus the acis for all parent entries all the way to the top of the tree.  Included in this list are any Global ACIs that may have been configured in the cn=config as well.  While not entirely mathematically accurate, the following formula provides an insight into how the ACL is generated.

 

ACL(entry) = Global ACIs + aci(entry) + sum of aci(each parent entry up to the root)

 

Using the previous formula, the access control lists for each entry in the directory information tree would be as follows:

 

 

Once the ACL is created, the list is then processed to determine whether the client is allowed to perform the operation.  ACLs are processed as follows (a small illustrative sketch follows this list):

  1. If there exists at least one explicit DENY rule that prevents a user from performing the requested action (e.g. deny(write)), then the user is denied.
  2. If there exists at least one explicit ALLOW rule that allows a user to perform the requested action (e.g. allow(write)), then the user is allowed (as long as there are no other DENY rules preventing this).
  3. If there are neither DENY nor ALLOW rules defined for the requested action, then the user is denied. This is referred to as the implicit deny.
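
The decision logic above can be modeled in a few lines of plain JavaScript. This is purely illustrative and not OpenDJ code; the acl array and its effect/actions fields are assumptions made for the sketch:

// Illustrative only: models the deny / allow / implicit-deny order described above.
// "acl" is assumed to be an array of objects like { effect: 'allow', actions: ['write'] }.
function isAllowed(acl, action) {
  var explicitDeny = acl.some(function (aci) {
    return aci.effect === 'deny' && aci.actions.indexOf(action) !== -1;
  });
  if (explicitDeny) {
    return false; // rule 1: an explicit DENY always wins
  }

  var explicitAllow = acl.some(function (aci) {
    return aci.effect === 'allow' && aci.actions.indexOf(action) !== -1;
  });
  return explicitAllow; // rule 2: explicit ALLOW; rule 3: otherwise implicit deny
}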

Something to Think About…

Thought 1:  If, in the absence of any access control instructions, the default is to deny access, then what, you might ask, is the purpose of access control instructions?  ACIs with ALLOW rules are used to grant a user permission to perform some action.  Without ALLOW ACIs, all actions are denied (due to the implicit deny rule).

Thought 2:  If the default is to implicitly deny a user, then what is the purpose of DENY rules?  DENY rules are used to revoke a previously granted permission.  For instance, suppose that you create an ALLOW rule for the Help Desk Admin group to access a user’s PII data in order to help determine the user’s identity for a password reset.  But you have a recently hired Help Desk Admin who has not completed the required sensitivity training.  You may elect to keep him in the Help Desk Admin group for other reasons, but revoke his ability to read users’ PII data until his training has been completed.

Note:  You should use DENY rules sparingly.  If you are creating too many DENY rules, you should question how you have created your ALLOW rules.

Thought 3:  If the absence of access control instructions means that everyone is denied, then how can we manage OpenDJ in the event that conflicting ACIs are introduced?  Or worse, ACIs are dropped altogether?  That is where the OpenDJ Super User and OpenDJ privileges come in.

 

OpenDJ’s Super User:

 

The RootDN user (“cn=Directory Manager” by default) is a special administrative user that can pretty much perform any action in OpenDJ.  This user account is permitted full access to directory server data and can perform almost any action in the directory service, itself.  Essentially, this account is similar to the root or Administrator accounts on UNIX and Windows systems, respectively.

If you look in the directory server you will find that there are no access control instructions granting the RootDN this unrestricted access; there are, however, privileges that do so.

 

Privileges:

 

While access control instructions restrict access to directory data through LDAP operations, privileges define administrative tasks that may be performed by users within OpenDJ. Assignment of privileges to users (either directly or through groups) effectively allows those users the ability to perform the administrative tasks defined by those privileges.

The following table provides a list of common privileges and their relationship to the RootDN user.

 

[Table: common privileges and whether they are assigned to the RootDN user by default]

 

The RootDN user is assigned these privileges by default and similar to Global ACIs, these privileges are defined and maintained in the OpenDJ configuration object.  The following is the default list of privileges associated with Root DN users (of which the Directory Manager account is a member).

 

dn: cn=Root DNs,cn=config

objectClass: ds-cfg-root-dn

objectClass: top

ds-cfg-default-root-privilege-name: bypass-lockdown

ds-cfg-default-root-privilege-name: bypass-acl

ds-cfg-default-root-privilege-name: modify-acl

ds-cfg-default-root-privilege-name: config-read

ds-cfg-default-root-privilege-name: config-write

ds-cfg-default-root-privilege-name: ldif-import

ds-cfg-default-root-privilege-name: ldif-export

ds-cfg-default-root-privilege-name: backend-backup

ds-cfg-default-root-privilege-name: backend-restore

ds-cfg-default-root-privilege-name: server-lockdown

ds-cfg-default-root-privilege-name: server-shutdown

ds-cfg-default-root-privilege-name: server-restart

ds-cfg-default-root-privilege-name: disconnect-client

ds-cfg-default-root-privilege-name: cancel-request

ds-cfg-default-root-privilege-name: password-reset

ds-cfg-default-root-privilege-name: update-schema

ds-cfg-default-root-privilege-name: privilege-change

ds-cfg-default-root-privilege-name: unindexed-search

ds-cfg-default-root-privilege-name: subentry-write

cn: Root DNs

 

This list can be retrieved using the OpenDJ dsconfig command:

 

./dsconfig -h localhost -p 4444 -D "cn=directory manager" -w password get-root-dn-prop

 

with the ldapsearch command:

 

./ldapsearch -h hostname -p portnumber -D "cn=directory manager" -w "password" -b "cn=config" -s sub "cn=Root DNs" ds-cfg-default-root-privilege-name

 

or simply by opening the config.ldif file and locating the entry for the “cn=Root DNs” entry.

Most operations involving sensitive or administrative data require that a user has both the appropriate privilege(s) as well as certain access control instructions.  This allows you to configure authorization at a fine grained level – such as managing access control or resetting passwords.

Privileges are assigned to users and apply globally to the directory service.  Any user can be granted or denied any privilege and by default only the RootDN users are assigned a default set of privileges.

Note:  Consider creating different types of administrative groups in OpenDJ and assigning the privileges and ACIs to those groups to define what a group member is allowed to do.  Adding users to a group then automatically grants those users the rights defined for the group and, conversely, removing them from the group drops those privileges (unless they are granted through another group).

 

Effective Rights:

 

Once you set up a number of ACIs, you may find it difficult to understand how the resulting access control list is processed and ultimately the rights that a particular user may have.  Fortunately OpenDJ provides a method of evaluating the effective rights that a subject has on a given target.

You can use the ldapsearch command to determine the effective rights that a user has on one or more attributes on one or more entries.

$ ldapsearch -h localhost -p 1389 -D "cn=Directory Manager" -w password -g "dn:uid=helpdeskadmin,ou=administrators,dc=example,dc=com" -b "uid=scarter,ou=people,dc=example,dc=com" -s base '(objectclass=*)' '*' aclrights

The preceding search is being performed by the Root DN user (“cn=Directory Manager”).  It is passing the -g option requesting the get effective rights control (to which the Directory Manager has the appropriate access configured). The command wants to determine what rights the Help Desk Administrator (uid=helpdeskadmin,…) has on Sam Carter’s entry (uid=scarter,…).  The scope of the search has been limited only to Sam Carter’s entry using the base parameter.  Finally, the search operation is returning not only the attributes, but the effective rights (aclrights) as well.

Possible results from a search operation such as this are as follows:

 

dn: uid=scarter,ou=People,dc=example,dc=com
objectClass: person
objectClass: top
uid: scarter
userPassword: {SSHA}iMgzz9mFA6qYtkhS0Z7bhQRnv2Ic8efqpctKDQ==
givenName: Sam
cn: Sam Carter
sn: Carter
mail: sam.carter@example.com
aclRights;attributeLevel;objectclass: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;uid: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;userpassword: search:0,read:0,compare:0,write:1,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;givenname: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;cn: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;sn: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;attributeLevel;mail: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0
aclRights;entryLevel: add:0,delete:0,read:1,write:0,proxy:0

The search results contain not only the attributes/attribute values associated with Sam Carter’s object, but also the effective rights that the Help Desk Admins have on those attributes.  For instance,

aclRights;attributeLevel;givenname: search:1,read:1,compare:1,write:0,selfwrite_add:0,selfwrite_delete:0,proxy:0

 

The aclRights;attributeLevel;givenname notation indicates that this line includes the effective rights for the givenname attribute.  The individual permissions listed demonstrate the rights that the Help Desk Administrator has on this attribute for Sam Carter’s entry (1 = allowed, 0 = denied).

 

Recommendations:

 

An OpenDJ installation includes a set of default (Global) access control instructions which by some standards may be considered insecure.  For instance, there are five ACIs that allow an anonymous user to read certain controls, extended operations, operational attributes, schema attributes, and user attributes.  The basic premise behind this is that ForgeRock wanted to provide an easy out-of-the-box evaluation of the product while at the same time providing a path forward for securing it.  It is intended that OpenDJ should be hardened to meet a company’s security policies, and in fact one task that is typically performed before placing OpenDJ in production is to limit anonymous access.  There are two ways you can do this:

  1. Enable the reject-unauthenticated-request property using the dsconfig command.
  2. Update the Global ACIs

Mark Craig provides a nice blog posting on how to turn off anonymous access using the dsconfig command.  You can find that blog here.  The other option is to simply change the reference in the Global ACIs from ldap:///anyone to ldap:///all.  This prevents anonymous users from gaining access to this information.

Note:  Use of ldap:///anyone in an ACI includes both authenticated and anonymous users – essentially, anyone.  Changing this to ldap:///all restricts the subject to all authenticated users.

The following comments from Ludo Poitou (ForgeRock’s OpenDJ Product Manager) should be considered before simply removing anonymous access.

You don’t want to remove the ACI rules for Anonymous access, you want to change it from granting access to anyone (ldap:///anyone) to granting access to all authenticated users (ldap:///all).

This said, there are some differences between fully rejecting unauthenticated requests and using ACIs to control access. The former will block all access, including attempts to discover the server’s capabilities by reading the RootDSE. The latter allows you to control which parts can be accessed anonymously, and which shouldn’t.

There’s been a lot of fuss around allowing anonymous access to a directory service. Some people are saying that features and naming context discovery is a threat to security, allowing malicious users to understand what the server contains and what security mechanisms are available and therefore not available. At the same time, it is important for generic purpose applications to understand how they can or must use the directory service before they actually authenticate to it.

Fortunately, OpenDJ has mechanisms that allow administrators to configure the directory services according to their security constraints, using either a simple flag to reject all unauthenticated requests, or by using ACIs.

A few other things to consider when configuring access control in OpenDJ include the following:

  1. Limit the number of Root DN user accounts

You should have one Root DN account and it should not be shared among multiple administrators.  Sharing the account makes it nearly impossible to determine the identity of the person who performed a configuration change or operation in OpenDJ.  Instead, make the password complex and store it in a password vault.

  2. Create a delegated administration environment

Now that you have limited the number of Root DN accounts, you need to create groups to allow users administrative rights in OpenDJ.  Users would then log in as themselves and perform operations against the directory server using their own account.  The tasks associated with this are as follows:

  3. Associate privileges and ACIs to users for fine grained access control

Now that you have created administrative groups, you are ultimately going to need to provide certain users with more rights than others.  You can create additional administrative groups, but what if you only need one user to have these rights?  Creating a group of one may or may not be advisable and may actually lead to group explosion (where you end up with more groups than you actually have users).  Instead, consider associating privileges with a particular user and then creating ACIs based on that user.


GluuOSCON 2015 Access Management Workshop [Technorati links]

March 19, 2015 05:39 PM


Description

Centralizing authentication and access management can enable your domain to more quickly adapt to changing security requirements. This workshop will provide an overview of the Gluu Server, including the architecture, installation process, and configuration. The workshop will show how to centrally control access to APIs for a web or native client using the OpenID Connect and UMA profiles of OAuth2.

Abstract

There are significant security advantages to a domain offering centralized authentication and authorization APIs. If developers hard code security, updates to policies require re-building applications and sometimes regression testing. This can make it hard for an organization to quickly change policies to respond to threats.

Since the late 90’s, enterprise Identity and Access Management (“IAM”) suites have been available from large vendors like Oracle, CA, IBM and RSA. While individual FOSS components exist, assembling an equivalent IAM stack was difficult. In 2009, the Gluu Server set out to change this.

This workshop will guide the attendee on how to deploy a Gluu Server, using either the CentOS or Ubuntu packages. It will also include a tutorial on how to configure applications to leverage the central authentication and authorization infrastructure. No programming is required for the basics—most of the examples will involve only Linux system administration—but some advanced use cases will be presented to demonstrate how a programmer could call the APIs from a sample Python application. Also included will be an overview of authentication APIs, including SAML, OAuth2 and LDAP.

The goal is to de-mystify the art of Identity and Access management, and provide the attendee with the tools they need to improve application security at their organization.

March 18, 2015

Julian BondNow this is interesting. Small (and quite cheap) daughter boards for IPod Classics that let you run ... [Technorati links]

March 18, 2015 12:50 PM
Now this is interesting. Small (and quite cheap) daughter boards for iPod Classics that let you run CF flash cards, SDXC cards and mSATA SSDs. This (for a price!) will let you build a solid state 1TB iPod Classic!

http://www.tarkan.info/

Hidden away in there is the gotcha for owners of the iPod Classic 6g/6.5g with the 80GB thin and 160GB thick cases. These used an unusual CE-ATA interface, but this is just the ribbon cable. You can get and fit a very cheap alternate ribbon cable designed for the 5g, 5.5g and 7g iPods. This then lets you use the ATA disks. It's unconfirmed, but it looks like this would let you fit the fat 240GB disk in a 6.5g 160GB iPod. The catch here is that the thin 80GB and fat 160GB iPods will only address 128GB unless it's a hard drive. So while >128GB hard drives may be possible, >128GB solid state drives are not. You can however step round this limitation by using Rockbox to replace the firmware.

So while there are some new upgrade options, it still looks like you need to start with the very last 7g classic with the v2.0.5 firmware. None of this is cheap in terms of the storage. And you may still have to switch to Rockbox because even if the firmware will accept use of the disk space it may still bork at the quantity of tracks and metadata.
 Tarkan's BORED »
Tarkan's BORED : Welcome to the notice bored.

[from: Google+ Posts]

Kantara InitiativeKantara Initiative Awards SUNET CSP Trustmark Grant at Assurance Level 1 and 2 [Technorati links]

March 18, 2015 01:12 AM

PISCATAWAY, NJ–(March 17, 2015) – Kantara Initiative announces the grant of the Kantara Initiative Service Approval Trustmark to the SUNET Credential Service Provider (CSP) service operating at Levels of Assurance 1 and 2. SUNET was assessed against the Identity Assurance Framework – Service Assessment Criteria (IAF-SAC) by Kantara’s European based Accredited Assessor, Europoint.

A global organization, Kantara Initiative Accredits Assessors, Approves Credential and Component Service Providers (CSPs) at Levels of Assurance 1, 2 and 3 to issue and manage trusted credentials for ICAM and industry Trust Framework ecosystems. The broad and unique cross section of industry and multi-jurisdictional stakeholders, within the Kantara Membership, have the opportunity to develop new Trust Frameworks and profiles of the core Identity Assurance Framework for applicability to their communities of trust.

The key benefits of Kantara Initiative Trust Framework Program participation include: rapid on-boarding of partners and customers, interoperability of technical and policy deployments, an enhanced user experience, and competition and collaboration with industry peers. The Kantara Initiative Trust Framework Program drives toward modular, agile, portable, and scalable assurance to connect business, governments, customers, and citizens. Join Kantara Initiative to participate in the leading edge of trusted identity innovation development.

“We are very proud to be the first identity provider in Europe to be granted the Kantara trust mark. This is definitely a milestone for SUNET and even more so for our identity providing service eduID. We believe very strongly in investing in and following a joint vision when it comes to identity providing online, where Kantara is a steady rock. Our talented co-workers put a lot of effort into the certification process and have done so gladly. We must all, worldwide, share a joint vision. We are looking forward to keeping up the good work,” said Valter Nordh, in charge of identity federation and related issues at SUNET.

“We are glad to have had the opportunity to assess SUNET’s service eduID and to validate their successful efforts to comply with Kantara IAF. SUNET is committed to the efforts to work with and use industry accepted standards. Europoint are convinced that international standardization is the way to achieve identity assurance,” said Björn Sjöholm, CEO and Lead Auditor at Europoint.

“We congratulate SUNET on their grant of the Kantara Trustmark as an Approved CSP operating at Assurance Levels 1 and 2. Further, we are delighted to welcome SUNET as our first European region Approved CSP. This achievement marks a major milestone for the internationalization of our program,” said Joni Brennan, Executive Director, Kantara Initiative.

For further information, or to accelerate your business by becoming Kantara Accredited or Approved, contact secretariat@kantarainitiative.org.

____________________________________________________________________________

About Kantara Initiative: Kantara Initiative connects business as an agile Identity Services market growth acceleration organization supporting trusted transactions technology and policy through our innovations, requirements development, compliance programs, and information sharing among leaders from: industry, research & education, government agencies, and international stakeholders. Join. Innovate. Trust. http://kantarainitiative.org

___________________________________________________________________________

About SUNET:

Since the 1980s, SUNET has provided Swedish universities, colleges and many other organizations and agencies with infrastructure for data communication. The SUNET network is fast, stable and secure throughout the whole country. In addition to the actual network, SUNET offers various services that facilitate members’ activities, such as eduID, eduroam, services for e-meetings, project management, mail filters and much more. SUNET also contributes to important issues in the world of Internet and infrastructure – please read more at http://sunet.se

Watch a short film about SUNET: https://play.sunet.se/media/Sunet+-+EN/1_xxq21198/25557561

____________________________________________________________________________

About Europoint:

Europoint is based in Sweden and offers consultancy services, specialist competence, and audits within the field of Information and IT security. Europoint is a Payment Card Industry (PCI) approved QSA and PA-QSA Company, and a PAN Nordic Card Association (PNC) third party auditor.

http://www.europoint.se

____________________________________________________________________________

March 17, 2015

KatasoftHow to build an app with AngularJS, Node.js and Stormpath in 15 minutes [Technorati links]

March 17, 2015 03:00 PM

AngularJS is a framework for building front-end (browser) applications, also known as “Single Page Apps” (SPAs), and we think it’s superb!

AngularJS makes it very easy to build a complex, responsive application, particularly to put a SPA on top of your API service. And once you have an app up, you want your users to be able to log in.

In this tutorial we will:

Here is a preview of what it will look like:

Registration Form

Login Form

User Profile View

Let’s get started!

Generate a Project With Yeoman, Bower and Grunt

If this is your first time using Yeoman, Bower and Grunt, you’re in for a treat! They make it very easy to “scaffold” or “generate” the boilerplate code for a fullstack AngularJS + Node.js project.

We’re going to use a specific Yeoman generator to generate the boilerplate for our fullstack project. The generated project will use Bower to manage the front-end dependencies, and Grunt for performing tasks like rebuilding your code as you edit it.

Assuming you already have Node.js on your system, you can install these tools with these commands:

$ npm install -g grunt-cli
$ npm install -g yo
$ npm install -g bower
$ npm install -g generator-angular-fullstack

Once that’s done you need to create a project directory and change directories into it:

$ mkdir my-angular-project && cd $_

Now for the fun part: running the generator. Kick it off with this command:

$ yo angular-fullstack dashboard-app

We are calling our app “dashboard-app”, because it’s going to be a dashboard for users who visit our website. The generator will ask you several questions, such as which templating engine to use. We’re sticking to vanilla HTML/CSS/JS for this guide; the only opinionated choice we are making is to use the third-party UI Router instead of Angular’s default $route service. Here are the choices that we made:

# Client

? What would you like to write scripts with? JavaScript
? What would you like to write markup with? HTML
? What would you like to write stylesheets with? CSS
? What Angular router would you like to use? uiRouter
? Would you like to include Bootstrap? Yes
? Would you like to include UI Bootstrap? No

# Server

? Would you like to use mongoDB with Mongoose for data modeling? No

Assuming everything installs OK you should now have the default project in place. Use this Grunt command to start the development server and see the application:

$ grunt serve

It should automatically open this page in your browser:

Sweet! We’ve got an AngularJS application and we’re ready to add the user components to it. Now would be a good time to start using Git with your project. You can stop the server by pressing Ctrl+C, then use these git commands:

$ git init
$ git add .
$ git commit -m "Begin dashboard app project"

Sign Up With Stormpath

If you don’t have a Stormpath account, head over to https://api.stormpath.com/register and sign up for one – it’s really easy and our Developer plan is always free. We’ll send you an email verification message; click on the link and then log in to the Admin Console at https://api.stormpath.com/login

Stormpath – once installed in your app – will automate a lot of the user functionality, interactions and views so you don’t have to build it.

Create an API Key Pair

Once you’ve logged into the Stormpath Admin Console, click the “Manage API Keys” button on the homepage. This will generate a new API key pair and prompt you to download it as a file. Keep this file safe and secure, we will need it in the next section.

Get Your Application HREF

Stormpath allows you to provision any number of “Applications”. An “Application” is just Stormpath’s term for a project. One is created for you automatically when you sign up. You can find this in the “Applications” section of the Admin Console, and if you click into it you will see its HREF – you will need this in the next section.

The general rule is that you should create one Application per website (or project). Since we’re just starting you can use the default application that was created for you, called “My Application”.

Add Stormpath to your API Server

The generator created an Express.js server for us; it serves a list of things at /api/things (you saw these listed on the home page of the application when you ran grunt serve for the first time). We will use Stormpath to secure this simple API.

We need to give your Stormpath information to Express. Locate the file server/config/local.env.js and add these properties to the export block, but using your information:

module.exports = {
  DOMAIN: 'http://localhost:9000',
  SESSION_SECRET: "dashboard-secret",
  // Control debug level for modules using visionmedia/debug
  DEBUG: '',
  STORMPATH_API_KEY_ID: 'YOUR_KEY_ID',
  STORMPATH_API_KEY_SECRET: 'YOUR_KEY_SECRET',
  STORMPATH_APP_HREF: 'YOUR_APP_HREF'
};

Upgrade Express.js

Run this command to make sure that you have the latest version of Express.js:

$ npm i --save express@latest

Configure the Middleware

Now we can configure the Stormpath Express SDK and use it to secure the API. Find the file server/routes.js, we’re going to make some modifications to it. This is the main file used by Express to configure the endpoints that are available in our API.

To begin you need to require the Stormpath Express SDK, add this line to the top of the file:

var stormpathExpressSdk = require('stormpath-sdk-express');

Now you can create an instance of the Stormpath middleware, add this line before the module.exports statement:

var spMiddleware = stormpathExpressSdk.createMiddleware();

We want to attach the Stormpath middleware to the Express application. Add this line inside the module.exports block, before any other app statements:

spMiddleware.attachDefaults(app);

This will attach a handful of necessary endpoints, such as the endpoint that will accept POST requests from the login form. This is all handled under the hood by the SDK, nothing more for you to do here!

Last but most importantly: we want to protect the things API. Modify the existing /api/things line to use the Stormpath authenticate middleware:

app.use('/api/things', spMiddleware.authenticate, require('./api/thing'));
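
Putting those edits together, the relevant part of server/routes.js should now look roughly like this; the rest of the generated file (error handling and the catch-all route for the AngularJS app) is unchanged and omitted here:

var stormpathExpressSdk = require('stormpath-sdk-express');

var spMiddleware = stormpathExpressSdk.createMiddleware();

module.exports = function(app) {

  // Attach Stormpath's default endpoints (login form POSTs, logout, etc.).
  spMiddleware.attachDefaults(app);

  // Require an authenticated user for the things API.
  app.use('/api/things', spMiddleware.authenticate, require('./api/thing'));

  // ...the generator's remaining routes stay as they were...
};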

Add Stormpath To The AngularJS Application

Since we are using Bower to manage dependencies for our AngularJS application, run this command in your project folder to install the Stormpath AngularJS SDK:

$ bower install --save stormpath-sdk-angularjs

When you use third-party modules in AngularJS you need to declare them as dependencies of your main application module. We need to declare the Stormpath Angular SDK as a dependency of our application.

Open the file client/app/app.js and modify the module list to look like so:

angular.module('dashboardAppApp', [
  'ngCookies',
  'ngResource',
  'ngSanitize',
  'ui.router',
  'stormpath',
  'stormpath.templates'
])

Note that we also included the Stormpath templates module; this module provides the default templates that we will be using in this tutorial. Want to write your own templates? No problem! All of the Stormpath directives can take an optional template. See the Stormpath AngularJS SDK API Documentation for more information.

Configure the UI Router Integration

While AngularJS does provide a routing system, via the $route service, we’ve decided to use the UI Router module for this application. It’s very well featured and the norm for complex AngularJS applications.

In the same app.js file you want to add this run block, place it below the .config block (make sure you move the semicolon from the config block to the run block):

.run(function($stormpath){
  $stormpath.uiRouter({
    loginState: 'login',
    defaultPostLoginState: 'main'
  });
});

This tells the SDK to use the login state whenever an unauthenticated user tries to access a restricted view, and to send users to the main state after they log in.

This completes our initial configuration of the SDK, in the next section we will add some links to the menu so that we can access the new views we will be creating.

Customize the Menu

In the next few sections we will create views for a login form, a user registration form, and a user profile page. We’ll need some clickable links that we can use to get to those pages, so we’ll create those links first.

Open the file client/components/navbar/navbar.html and replace the <ul> section with this markup:

<ul class="nav navbar-nav">
  <li ng-repeat="item in menu" ng-class="{active: isActive(item.link)}">
      <a ng-href="{{item.link}}">{{item.title}}</a>
  </li>
  <li if-user ng-class="{active: isActive('/profile')}">
      <a ng-href="/profile">Profile</a>
  </li>
  <li if-not-user ng-class="{active: isActive('/register')}">
      <a ui-sref="register">Register</a>
  </li>
  <li if-not-user ng-class="{active: isActive('/login')}">
      <a ui-sref="login">Login</a>
  </li>
  <li if-user ng-class="{active: isActive('/logout')}">
      <a ui-sref="main" logout>Logout</a>
  </li>
</ul>

We’ve retained the piece that iterates over the default links, but we also added the new links that we want. We are using the ifUser and ifNotUser directives to control the visibility of the links.

You can reload the app to see these changes, but the links won’t work until we complete the next few sections.

Create the Registration Form

We want users to register for accounts on our site. The Stormpath AngularJS SDK provides a built-in registration form that we can just drop in. When it’s used, Stormpath will create a new account in the Stormpath Directory that is associated with the Stormpath Application that you are using.

Generate the /register Route

When you want to create a new route and view in an Angular application, there are a handful of files that you have to make. Thankfully the generator is going to help us with this.

Stop the server and run this command in your project folder:

$ yo angular-fullstack:route register

It will ask you some questions; just hit enter to choose the defaults for all of them. Now start the server and click the link to the Register page, and you will see the default view that was created:

This is the default template that was created by the generator. In the next section we will edit that template and use the default registration form that Stormpath provides.

Use the Registration Form Directive

AngularJS has a component called a “directive”. They can look like custom HTML elements, or they can look like attributes that you add to existing elements. In either case you are “mixing in” some functionality that another module provides.

The Stormpath AngularJS SDK provides a directive which inserts the default registration page into the given element. Open the file client/app/register/register.html and then replace its contents with this (notice where the directive is):

<div ng-include="'components/navbar/navbar.html'"></div>

<div class="container">
  <div class="row">
    <div class="col-xs-12">
      <h3>Registration</h3>
      <hr>
    </div>
  </div>
  <div sp-registration-form post-login-state="main"></div>
</div>

This is a small bit of HTML markup which includes the common navbar, sets up a basic Bootstrap layout with a heading, and inserts the Stormpath registration form via the sp-registration-form directive, configured to send users to the main state after login.

Save that file and the browser should auto reload, you should now see the registration route like this:

You can now register for your application, so give it a try! After you register you will be prompted to login, but the link won’t work yet – we’re going to build the Login form in the next section.

Stormpath also provides email verification features: we can send a link that the user must click on before their account is activated. We discuss this in detail in the Stormpath AngularJS Guide, see the Email Verification Directive section.

Create the Login Form

Just like the registration form, Stormpath also provides a default login form that you can easily include in your application. Like we did with the Registration page, we will use the generator to create the login route and view.

Stop the server and run this command in your project folder, pressing enter to choose all the default options when it asks:

$ yo angular-fullstack:route login

Use the Login Form Directive

Open the file client/app/login/login.html and then replace its contents with this:

<div ng-include="'components/navbar/navbar.html'"></div>

<div class="container">
  <div class="row">
    <div class="col-xs-12">
      <h3>Login</h3>
      <hr>
    </div>
  </div>
  <div sp-login-form></div>
</div>

This is a small bit of HTML markup which includes the common navbar, sets up the same basic Bootstrap layout with a heading, and inserts the Stormpath login form via the sp-login-form directive.

Save that file and the browser should auto reload, you should now see the login route like this:

At this point you should be able to login to your application! It will simply redirect you back to the main view, which isn’t very exciting. In the next section we’ll build a bare-bones user profile view that will show you the details about your user object.

You will also notice that the Login form has a link to a password reset flow. If you would like to implement this, please see the Password Reset Flow section of the Stormpath AngularJS Guide

Creating a User Profile View

Most user-centric applications have a “Profile” view, a place where the user can view and modify their basic profile information. We’ll end this tutorial with a very basic profile view: a simple display of the JSON object that is the Stormpath user object.

Generate the /profile Route

Alright, one more time! We’re going to use the generator to scaffold the files for us:

$ yo angular-fullstack:route profile

Force Authentication

The user must be logged in if they want to see their profile; otherwise there is nothing to show! We want to prevent users from accessing this page if they are not logged in. We do that by defining the SpStateConfig on the UI state for this route.

Open the file client/app/profile/profile.js and modify the state configuration to include the Stormpath state configuration:

.state('profile', {
  url: '/profile',
  templateUrl: 'app/profile/profile.html',
  controller: 'ProfileCtrl',
  sp: {
    authenticate: true
  }
});

Build the View

Because we declared authentication for this state, the user is guaranteed to be logged in by the time this view loads (otherwise they are redirected to the login page).

With that assumption we can build our template without annoying switches or waits. The Stormpath AngularJS SDK will automatically assign the current user object to user on the Root Scope, so it will always be available in your templates.

Open the file client/app/profile/profile.html and then replace its contents with this:

<div ng-include="'components/navbar/navbar.html'"></div>

<div class="container">
  <div class="row">
    <div class="col-xs-12">
      <h3>My Profile</h3>
      <hr>
    </div>
  </div>

  <div class="row">
    <div class="col-xs-12">
      <pre ng-bind="user | json"></pre>
    </div>
  </div>
</div>

Just like the other pages, we’ve included our common menu and set up some basic Bootstrap classes. The <pre> block will leverage Angular’s built-in JSON filter to show the user object.

Try it out! Make sure you’re logged in, then click the Profile link. You should now see your user data:

Conclusion.. But Wait, There’s More!

This tutorial leaves you with a new AngularJS application, with a registration and login form for your users. If you want to add email verification or password reset, please refer to the complete Stormpath AngularJS Guide.

We are developing our AngularJS SDK very quickly, and new features are being added all the time! We suggest that you subscribe to the Stormpath AngularJS SDK on Github, and follow the complete guide for updates.

Here is a short list of upcoming features:

With that.. I hope that you’ll check back soon, and happy coding!

-Robert

IS4UVisual C#: RSA encryption using certificate [Technorati links]

March 17, 2015 01:38 PM

Intro

RSA is a well-known cryptosystem that uses asymmetric encryption: it performs encryption using a public key and decryption using a private key. The private key should be protected. The most efficient way of managing these keys in a Windows environment is by using certificates. To protect the private key, you should make it not exportable. This way the private key is only available on the machine where it is used.

Create certificate

Request certificate from CA

When enrolling for a certificate, make sure that the template has the Legacy Cryptographic Service Provider selected. Otherwise .Net will not be able to use the certificate. It will crash with this exception:
Unhandled Exception: System.Security.Cryptography.CryptographicException: Invalid provider type specified.


Generate self-signed certificate

Windows Server 2012 R2 provides a cmdlet, New-SelfSignedCertificate, to generate a certificate. However, this cmdlet does not provide sufficient parameters to generate a certificate that can be used in C#. I used the following script to generate my self-signed certificate, New-SelfsignedCertificateEx:
This script is an enhanced open-source PowerShell implementation of the deprecated makecert.exe tool and utilizes the most modern certificate API — CertEnroll.
In my search for the correct value for the ProviderName parameter, I came across the interface IX509PrivateKey, which provides a LegacyCsp boolean flag. I added the line $PrivateKey.LegacyCsp = $true in #region Private Key [line 327]. After this addition, the following PowerShell command gave me a certificate I could use to perform RSA encryption and decryption:
param($certName)
Import-Module .\New-SelfSignedCertificateEx.psm1
New-SelfsignedCertificateEx -Subject "CN=$certName" `
  -KeyUsage "KeyEncipherment, DigitalSignature" `
  -StoreLocation "LocalMachine" -KeyLength 4096

Grant access to private key

The account(s) that will perform the decryption requires read access to the private key of the certificate. To configure this, open a management console (mmc). Add the certificates snap-in for the local computer. In the certificate store, right click the certificate, go to all tasks and click Manage Private Keys. Add the account and select Read. Apply the changes.
Alternatively, you can script the process using an extra module to find the private key location:
param($certName, $user)
Import-Module .\Get-PrivateKeyPath.psm1
$privateKeyPath = Get-PrivateKeyPath CN=$certName -StoreName My `
    -StoreScope LocalMachine
& icacls.exe $privateKeyPath /grant ("{0}:R" -f $user)

Export public key

The certificate with public key can be published and/or transported to a partner that needs to send sensitive data. To export the certificate with public key execute following script:
param($certName)
$Thumbprint = (Get-ChildItem -Path Cert:\LocalMachine\My `
  | Where-Object {$_.Subject -match "CN=$certName"}).Thumbprint;
Export-Certificate -FilePath "C:\$certName.crt" `
  -Cert Cert:\LocalMachine\My\$Thumbprint
 
Do not forget to refresh the certificate keys on a regular basis.
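As an aside, a partner who only has the exported .crt can already encrypt data with it, for example straight from PowerShell. The following is just a minimal sketch (file path and input string are examples); it uses the same PKCS#1 v1.5 padding as the C# code below:

# Load the exported certificate (public key only)
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\$certName.crt")
# Encrypt a UTF-8 string with the public key
$bytes = [System.Text.Encoding]::UTF8.GetBytes("my sensitive data")
$rsa = $cert.PublicKey.Key
[Convert]::ToBase64String($rsa.Encrypt($bytes, $false))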

Load certificate

The following code snippet shows how to locate and load the certificate.
using System;
using System.Security.Cryptography.X509Certificates;
 
private X509Certificate2 getCertificate(string certificateName)
{
    X509Store my = new X509Store(StoreName.My, StoreLocation.LocalMachine);
    my.Open(OpenFlags.ReadOnly);
    X509Certificate2Collection collection =
        my.Certificates.Find(X509FindType.FindBySubjectName, certificateName, false);
    if (collection.Count == 1)
    {
        return collection[0];
    }
    else if (collection.Count > 1)
    {
        throw new Exception(string.Format(
            "More than one certificate with name '{0}' found in store LocalMachine/My.",
            certificateName));
    }
    else
    {
        throw new Exception(string.Format(
            "Certificate '{0}' not found in store LocalMachine/My.",
            certificateName));
    }
}

Encryption

The following code snippet shows how to encrypt the input using a certificate.
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;
 
private string EncryptRsa(string input)
{
       string output = string.Empty;
       X509Certificate2 cert = getCertificate(certificateName);
       using (RSACryptoServiceProvider csp = (RSACryptoServiceProvider)cert.PublicKey.Key)
       {
              byte[] bytesData = Encoding.UTF8.GetBytes(input);
              byte[] bytesEncrypted = csp.Encrypt(bytesData, false);
              output = Convert.ToBase64String(bytesEncrypted);
       }
       return output;
}

Decryption

The following code snippet shows how to decrypt the input using a certificate. Make sure that the account running this code has read access to the private key.
using System;
using System.Security.Cryptography;
using System.Security.Cryptography.X509Certificates;
using System.Text;
 
private string decryptRsa(string encrypted)
{
    string text = string.Empty;
    X509Certificate2 cert = getCertificate(certificateName);
    using (RSACryptoServiceProvider csp = (RSACryptoServiceProvider)cert.PrivateKey)
    {
           byte[] bytesEncrypted = Convert.FromBase64String(encrypted);
           byte[] bytesDecrypted = csp.Decrypt(bytesEncrypted, false);
           text = Encoding.UTF8.GetString(bytesDecrypted);
    }
    return text;
}

References

Radovan Semančík - nLightHow to Get Rich by Working on Open Source Project? [Technorati links]

March 17, 2015 10:50 AM

There was a nice little event in Bratislava called Open Source Weekend. It was organized by the Slovak Society for Open Information Technologies. It has been quite a long time since I gave a public talk, therefore I decided that this was a good opportunity to change that. So I gave quite an unusual presentation for this kind of event. The title was: How to Get Rich by Working on Open Source Project?

This was really an unusual talk for an audience that is used to talks about Linux hacking and Python scripting. It was also an unusual talk for me, as I still consider myself to be an engineer and not an entrepreneur. But it went very well. For all of you that could not attend, here are the slides.

OSS Weekend photo

The bottom line is that it is very unlikely to ever get really rich by working on open source software. I also believe that the usual "startup" method of funding based on venture capital is not very suitable for open source projects (I have written about this before). Self-funded approach looks like it is much more appropriate.

(Reposted from https://www.evolveum.com/get-rich-working-open-source-project/)

Julian Bond#WeLoveTheNHS  Don't let them destroy it. And pay attention to NHS promises in the coming UK election... [Technorati links]

March 17, 2015 07:47 AM
#WeLoveTheNHS  Don't let them destroy it. And pay attention to NHS promises in the coming UK election.

http://www.theguardian.com/society/2015/mar/15/labour-and-tories-refuse-to-commit-to-doctors-nhs-funding-plea

http://www.theguardian.com/society/2015/mar/15/nhs-needs-extra-8bn-to-survive
http://www.theguardian.com/society/2015/mar/12/nhs-agrees-largest-ever-privatisation-deal-to-tackle-backlog
 Labour and Tories refuse to commit to doctors' £8bn NHS funding plea »
Alliance of senior doctors warn that future of health service will be at risk without extra £8bn a year, with the Lib Dems the only party to make any such pledge

[from: Google+ Posts]

Kaliya Hamlin - Identity WomanEllo….on the inside [Technorati links]

March 17, 2015 12:24 AM

So. I FINALLY got my invitation to Ello.

I go in…make an account.

I check the Analytics section.

Ello uses an anonymized version of Google Analytics to gather and aggregate general information about user behavior. Google may use this information for the purpose of evaluating your use of the site, compiling reports on site activity for us and providing other services relating to site activity and internet usage. Google may also transfer this information to third parties where required to do so by law, or where such third parties process the information on Google’s behalf. To the best of our knowledge, the information gathered by Google on Ello’s behalf is collected in such a way that neither Ello, nor Google, can easily trace saved information back to any individual user.

Ello is unique in that we offer our users the option to opt-out of Google Analytics on the user settings page. We also respect “Do Not Track” browser settings. On your Ello settings page, you can choose to turn Google Analytics off completely when you visit the Site. If you choose either of these options, we make best efforts not to send any data about your user behavior, anonymized or otherwise, to Google or any other third party service provider. Please be aware that there may be other services that you are using and that are not controlled by Ello (including Google, Google Chrome Web Browser, Android Operating System, and YouTube) that may continue to send information to Google when you use the Site, even if you have asked us not to send information through our services.

Not sure what to make of all this.

March 15, 2015

Rakesh RadhakrishnanThreat IN based AuthN controls, Admission controls and Access Controls [Technorati links]

March 15, 2015 04:02 AM
For large enterprises evaluating next generation Threat Intelligence (Incident of Compromise detection tools) platforms such as FireEye, Fidelis and Sourcefire, one of the KEY evaluation criteria is how much of the Threat Intelligence generated can act as Actionable Intelligence. This requires extensive integration of the Threat IN platform with the other enterprise security controls.
Although integrating security systems together for cross-control co-ordination is very COOL, all these are ONE OFF integrations. Threat IN standards such as TAXII, STIX and CybOX allow for XML based expression of an "incident of compromise" (STIX) and for secure straight-through integration (TAXII). Since dozens of vendors have started expressing AC policies in XACML - from IBM Guardium, to Nextlabs DLP and FireLayer Cloud Data Controller, to Layer7 (XML/API firewalls) and Queralt (PAC/LAC firewalls) - it is ONLY natural to expect a STIX profile to support XACML (hopefully an effort from OASIS in 2015). The extensibility of XACML allows for expression of ACL, RBAC, ABAC, RiskADAC and TBAC all in XACML, and the policy combination algorithms in XACML can easily extend to "deny override" when it comes to real time Threat Intelligence. This approach will allow enterprises to capture Threat IN and implement custom policies based on that IN in XACML without having one off integrations and vendor LOCK IN, similar to the approach proposed by OASIS here. It's good to see many vendors supporting STIX - including Splunk, Tripwire and many more.


Why do we need such Integrated Defense?

Simply having Breach Detection capabilities will not suffice; we need both prevention and tolerance as well.

One threat use case model can include every possible technology put to use; for example clone-able JSONBOTS (JSON over XMPP) infused from Fast-flux Domains, using Trojan Zebra style (parts of the code shipped randomly), assembled at a pre-determined time (leveraging a Zero Day vulnerability) along with a Commander BOT, appearing in the server side components of the network, making Secure Indirect Object References (the opposite of IDOR) as they have been capturing the indirect object names needed (over time with BigData Bots already - reconnaissance), to exfiltrate sensitive data in seconds, and going dormant in minutes. One malware use case - that is a Bot/BotNet, C&C, ZeroDay, TrojanZebra and an APT that leverages Big Data (all combined).

You don't know what hit you (at the application layer as it is distributed code via distributed injections that was cloned and went dormant in seconds), where it came from (not traceable to IP addresses or domains FFD), how it entered (trojan zebra), when it came alive (zero day), how persistent it can be (cloning), what path it took (distributed) and what data it stole?

This is the reality of today! Quite difficult to catch in a VEE (virtual execution environment) as well. What is needed is a combination of advanced Threat Detection (threat intelligence and threat analytics) combined with responsive/dynamic Threat Prevention systems (threat IN based access controls including format preserving encryption and fully homomorphic encryption) and short lived, stateless, self cleansing ( http://www.scitlabs.com/en/ ) Threat Tolerant systems as well (which can include web containers and app containers). Scitlabs-like technology used for continually maintaining High Integrity network security services (rebooting from a trusted image) is very critical, to ensure that the preventive controls at the Data Layer will work (consistent, cohesive and co-ordinated data object centric policies in DLP, DB FW and Data Tokenization engines).

If the Data is the crown jewel the thieves are after, imagine - you know your community gates are breached and you have a warning - you would lock down your house and ship that "pricey diamond" via a tunnel to the east coast, won't you... Even in case the thief barges through your front door, gets to the safebox and breaks it open, it is ONLY to find the diamond GONE! (that's intrusion tolerant design). And fortunately in the digital world that's quite possible - a Data Center is identified with an IOC by a FireEye or Fidelis, instantly the AC policies from AuthN to End point to App and Data change, while at the same time the DBMS hosting the sensitive data (replicated to an unbreached DR site) is quiesced and the data is purged.

Dynamic Defensive Designs #1

Rakesh RadhakrishnanPrivileged Access & Physical Access Alignment [Technorati links]

March 15, 2015 03:17 AM
One of the main stages of an attack by Cyber Criminals - after phishing and setting up a backdoor - is privilege escalation: essentially stealing privileged administrative credentials (systems admin, network admin, DB admin or security admin) in order to move on to the next stages of data gathering and exfiltration.

Unlike in the past, wherein physical proximity to a Data Center was a pre-cursor for logical access as a privileged admin, remote connectivity to perform all admin tasks is the norm today. These are the 5 things that can be done so that a breached end point (via spear phishing/phishing attacks) and a breached network (back door setup) do not allow for lateral movement or privilege escalation:

a) leverage strong authentication (biometric, device and OTP) via multi-factor authentication protocols such as FIDO, and then associate privileged access to a user (traceable to the end user)
b) ensure privileged access to a database or an OS does not mean access to sensitive data or sensitive credentials is possible (both credential token-ization and data token-ization are quite common today)
c) even though the privileged user is remote, leverage the correlation of physical (proximity context) access controls with privileged access (such as the Queralt demo for DHS and its XACML support)
d) ensure all admin access is fine grained with XACML - such that DB Admin access is controlled at the command level with a DB Firewall that supports XACML, an OS Admin is controlled at the command level with an OS firewall that supports XACML, etc.
e) ensure all privileged access policies expressed in XACML as suggested by NIST are dynamic - threat intelligence aware (so intrusion tolerance is possible for credential data and other sensitive data).

The capability to correlate physical AC to privileged Access with a Queralt-like system will ensure that only your team of network Admins located in a remote city such as Bangalore is capable of doing privileged admin duties, as many proximity contexts can be taken into account, other than mobile device location. Make privilege escalation impossible.

Dynamic Defensive Designs #3

Rakesh RadhakrishnanAn Architecture to Address the "Achilles heel" issue [Technorati links]

March 15, 2015 03:17 AM
Too many Data Breach stories recently (including Anthem). The recent Cloud Security Alliance event with IAPP focused on Data Breaches. From my perspective we can address this issue with a set of simple (yet profound) design strategies:

1st step: Do not collect security sensitive data when you do not need it. There is no enterprise business process that would require the persistent storage of a SS#, for example, in any enterprise - except the Social Security Office. HR and Payroll processes that require a SS# can work with a token-ized representation of the SS# (format preserving encryption) and have such processes make the linkage to the actual SS# on the other side. For example, PCI transactions are offloaded to a payment processing service (with a token-ized representation of the CC#) and need NOT be persistently stored; what is being transmitted is also FPE'd. Make this your standard Enterprise Data Collection POLICY - so that you do not have the headache of protecting sensitive data in the 1st place.

2nd step: Identify all PII, PCI, PHI and SOX security sensitive data repositories (using Qualys or Tripwire like VM tools) and ensure that token-ized representations are stored (data can stay encrypted end to end with FPE and FHE (fully homomorphic encryption)). In addition to a static analysis, use a DLP system, a Cloud Data Token-ization engine and a DAM/DB Firewall to discover all security sensitive data (at the end point with DLP, in the clouds (private and public) and in DB repositories (including Hadoop)). 90% of the data discovery should be known by Application Owners and Data Owners in the org; the remaining 10% can be discovered.

3rd step: Maintain clearly defined meta data about these security sensitive data (TAGS - PII, PCI, PHI, EHR, SOX, and more) and attempt to consolidate all security sensitive data into a Centralized Repository - like Oracle 12c - with a number of built in security controls (such as NO visibility even for a privileged DBA). This subset of data is probably less than 1% of all enterprise data: "the crown jewels". All security sensitive data can remain in the enterprise private cloud implementation (with full DR replication into another private cloud). Typical enterprises I know are deploying a Data Center consolidation effort with cloud technologies: private clouds that leverage the security advantages of abstractions between Hypervisor and VM (Hypervisor with a NIC for a private control network and VMs isolated to a NIC to the rest of the network - giving no possibilities for reaching into the VM or HV by web apps). Using Data Defined Storage (a big data technology), ensure these TAG'd data ONLY remain within the private cloud (they are FPE'd or FHE'd when they are moved to the cloud or used by the end point).

4th step: Using the meta data defined, author the TAG based XACML policies that are embeddable with the DB object as an XML file (so policies traverse with the data). Remember that while a Cloud Data token-ization system performs data control functions such as FPE and FHE, the control functions performed by a DB FW are different (data masking, data redaction and more), while the DLP system performs data encryption at the end point (even while data is captured by the UI). Also remember that, similar to the SAML BAE (back channel approach), XACML as an XML artifact can traverse to a PEP via back-channels too, and over UDP as well (XML over XMPP like).

5th step: The DB firewall is very critical as it is protecting the environment where this sensitive data is being processed (decrypted only in memory), and it ensures your enterprise data is protected from DBMS boot time to run time to quiesce time and replication time and more (with NO SQL traversing allowed for this data set - all SQL is blocked - ONLY stored procedures as objects defined in the firewall, called via secure indirect object referrals in the data abstraction layers, are made by the pen tested application). Both the DLP and Cloud Data token-ization engines are to be working only with token-ized and encrypted data. Now you have the following: you know what you need to protect, you have the intelligence needed to protect it (meta data - 99 percent of tagging is done at design time - typically only 1% can be dynamic tagging), and you have consistent, written and enforceable policies (see the paper on NPL to MPL to DPL) that can be deployed with the data anywhere in the world (in the consolidated data center, in a LAN, in the cloud or at the end point). Finally, ensure that these TAG'd policies are responsive to additional external Intelligence (like the Risk intelligence from a Securonix or the Threat Intelligence from a FireEye) - similar to the FireEye/Imperva integration - and ensure both the Application Containers and the Web Application Firewalls between the end point and this data are protected with a SCIT like self cleansing approach. Also ensure that Identity is in the STACK (multi-factor authenticated/strong authenticated user - end user and privileged user - identity is embedded end to end for traceability and forensics, from SAML/XACML aware APP to SAML/XACML aware OS/VM firewall and DB Firewall), similar to the F5 and Guardium integration.

This approach to data protection for High Risk or Sensitive Data will ensure that your enterprise data is protected, come what may as it takes the approach of detection, prevention and tolerance for data breaches.


Dynamic Defensive Designs #2




March 13, 2015

GluuGluu’s Unconventional Business Model [Technorati links]

March 13, 2015 05:32 PM

Gluu Business Model Diagram

Since hearing about the book Business Model Generation at the SXSW V2V conference, I’ve been interested in perfecting this diagram for Gluu.

Hopefully the diagram is pretty self-explanatory, but I’d like to offer a few highlights. Having a freemium part of your business model can be confusing–in our case it’s the free open source version of the Gluu Server, and the fact that we don’t license the source code. It took us a long time to make this work for us. But the hard work is paying off, and now I am pretty convinced that our freemium offering enables Gluu to leverage crowd sourcing to our competitive advantage.

Gluu needs more than open source enthusiasts to make the Gluu Server the world’s leading access management platform. Not surprisingly, we actually need money to fund developers and to answer all those community support questions. To meet this challenge, we’ve been on a mission to make Gluu into a more conventional software company: (1) we sell a high price, personal support option to large organizations; (2) we will introduce a low priced, automated self-support version of the Gluu Server to medium and small sized organizations (and start-ups).

However, don’t worry! Gluu is still pretty unconventional. Here are some things that may surprise you about Gluu.

  1. No Salesforce Companies do call us for information about the Gluu Server, and while we are happy to schedule a demo, or have an overview conversation about the functionality of the Gluu Server, we rely on our integration partners to prepare a quote and manage the sales process. Some people think this is crazy. But almost every organization on the planet needs better access management, so why bother calling a bunch of companies who aren’t ready for our product?
  2. Global Team Gluu has team members from Ukraine, India, Bangladesh, Russia, Indonesia, Bolivia, Argentina, China and Norway. Their commitment and contribution is what makes Gluu possible. I think it’s a competitive necessity to hire the best people, wherever they live. It’s a diverse team of many languages, nationalities, religions and cultures who share a common vision for free open source security software that will make the Internet a safer place.
  3. No VC funding Over the years we’ve talked to several venture capitalists, but none were a good fit for Gluu. Maybe one day this will change, but we’re not holding our breath. Perhaps Gluu will go straight to IPO–it happens more than you think. Alex Mandel, founder of the Funding Post, cites research from the Kaufman Foundation that in a given year, up to 2/3 of companies that IPO did not raise venture capital. There is a strong argument that venture capitalists are bad at identifying truly innovative companies, because they are looking to repeat successful business models, not invent new ones (see above). Whatever the reason, as opposed to many of our competitors who took tens of millions of dollars in venture capital, Gluu’s plan is to fund the business by making a profit.
  4. Austin, TX Let’s face it. 70% of technology investments are concentrated in companies that are located in the San Francisco, Boston, and New York metro areas. Austin has a bigger reputation than economy and entrepreneurial infrastructure. However, I happen to like living here, and I think my family enjoys a better quality of life than we would residing in or around one of the big cities. And we have SXSW. The company would probably be better off in the Bay Area, but we’re in it for the long term, and quality of life considerations outweigh the business disadvantages.
  5. No Office With a global workforce, there is no need for a big office. Will and I work out of WeWork, which is like a hotel for entrepreneurs. But while it’s probably true that communication breaks down after ten feet, Gluu uses tools like Skype, Gotomeeting, and Github to make up for our missing proverbial water cooler.
  6. One founder Most tech companies have three to four founders–a tech guy, a finance guy, a marketing guy, an operations guy. At Gluu, I was the only founder, and I ran the business without any help until William Lowe joined the management team in 2012. Over time, as the business model allows, we will grow the management team. However right now, there are very few filters between the customers, the Gluu team and the CEO. This keeps us agile and responsive, and I think enables us to have a real voice in the community, versus the sterilized output of some overly conservative marketing campaign.

 

So now you hopefully have an idea of what makes Gluu tick. This is still a work-in-progress, so stay tuned for further tweaks!

Julian BondSo farewell then, Zero the Hero. [Technorati links]

March 13, 2015 03:44 PM
So farewell then, Zero the Hero.

http://www.tinymixtapes.com/news/rip-daevid-allen-founder-of-gong

Proud to have seen you perform at 
- Hammersmith Palais
- Watchfield Festival
- Glastonbury Festival (Twice)
- Cambridge Corn Exchange
- And numerous times in my head.

You Never Blow Your Trip Forever. 

http://www.lyricsmania.com/you_never_blow_your_trip_forever_lyrics_gong.html
 RIP: Daevid Allen, founder of Gong | Music News | Etc | Tiny Mix Tapes »
From The Guardian:
Daevid Allen, the leader of the legendary prog-jazz eccentrics Gong, has died aged 77. The news was confirmed on the Facebook page of Allen’s son, Orlando Monday Allen.
[…]
Last month, Allen announced he had been given six months to live, after cancer for which he had previously been treated had spread to his lung. “I am not interested in endless surgical operations and in fact it has come as a relief to know that the end is in...

[from: Google+ Posts]

IS4UFIM2010: Protect passwords in configuration files [Technorati links]

March 13, 2015 03:01 PM

Intro

One of the great features of FIM is that it is relatively easy to plug in custom functionality. You can extend the synchronization engine by developing rules extensions and you can add custom workflows to the FIM portal. Rules extensions run under the FIM synchronization service account, workflows under the FIM service service account. This article describes an approach to enable communication to external systems (e.g. Exchange). Because you typically do not grant a service account rights to Microsoft Exchange, you need the ability to run part of your code using different credentials.

Encrypt password

You do not want to have passwords in clear text in configuration files or source code. That is where encryption comes into play. Encryption can be handled in a myriad of different ways. The method described here uses PowerShell cmdlets, which keeps it quite simple and understandable. So, how do we convert plain text to something more secure that cannot be read by anyone who happens to have read access to the files? The following two PowerShell commands are the answer:
$secureString = ConvertTo-SecureString -AsPlainText `
   -Force -String $pwd
$text = ConvertFrom-SecureString $secureString
The cmdlet ConvertTo-SecureString creates a secure string object from the password stored in the variable $pwd. The $text variable is a textual representation of the secure string and looks something like this: 01000000d08c9ddf0115d1118c7a00 c04fc297eb01000000c6c2de88df86 70438e7ac64054c971490000000002 000000000003660000c00000001000 0000103f2b826467c9fdaff555bc06 4b4da50000000004800000a0000000 1000000006673bf0a8cd2463ec9f9f d1a911dfc30800000076d2a3f9c110 2b6e1e0000006452d271a75a5df3a6 00f0f7cb45c18df98d3aae

Decrypt password

PowerShell

To decrypt the password in powershell, use the ConvertTo-SecureString cmdlet:
ConvertTo-SecureString $text
The output of this cmdlet is a secure string object which can be used to build a PSCredential object.
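For example, a minimal sketch that turns the stored text back into a credential (the account name is the same example account used in the C# snippet below):

$secureString = ConvertTo-SecureString $text
$credential = New-Object System.Management.Automation.PSCredential("is4u\sa-ms-exch", $secureString)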

C#

To perform decryption in C# you need to add a reference to System.Management.Automation in your project. To export the dll to the current directory, execute the following PowerShell command:
Copy ([PSObject].Assembly.Location) .
The following code shows how to use the PowerShell library to construct a PSCredential object. The PSCredential object can then be used to perform management operations on Exchange.
using System.Collections.ObjectModel;
using System.Management.Automation;
using System.Security;
 
private string PWD =
    "01000000d08c9ddf0115d1118c7" +
    "a00c04fc297eb01000000c6c2de88df8670438e7ac64054c9" +
    "71490000000002000000000003660000c0000000100000001" +
    "03f2b826467c9fdaff555bc064b4da50000000004800000a0" +
    "0000001000000006673bf0a8cd2463ec9f9fd1a911dfc3080" +
    "0000076d2a3f9c1102b6e1e0000006452d271a75a5df3a600" +
    "f0f7cb45c18df98d3aae";
private string USER = @"is4u\sa-ms-exch";
 
private PSCredential getPowershellCredential()
{
  PSCredential powershellCredential;
 string powershellUsername = USER;
 SecureString pwd = getPowershellPassword(PWD);
 if (pwd != null)
 {
  powershellCredential = new 
       PSCredential(powershellUsername, pwd);
 }
 else
 {
  throw new Exception("Password is invalid");
 }
  return powershellCredential;
}
 
private SecureString getPowershellPassword(string encryptedPwd)
{
 SecureString pwd = null;
 using (PowerShell powershell = PowerShell.Create())
 {
  powershell.AddCommand("ConvertTo-SecureString");
  powershell.AddParameter("String", encryptedPwd);
  Collection<PSObject> results = powershell.Invoke();
  if (results.Count > 0)
  {
   PSObject result = results[0];
   pwd = (SecureString)result.BaseObject;
  }
 }
 return pwd;
}

NetworkCredential

If your operation requires you to connect using a network credential instead of a PSCredential object, this is very easy. You can get the corresponding NetworkCredential object from the PSCredential.
using System.Net;
 
private NetworkCredential getNetworkCredential()
{
  NetworkCredential cred = getPowershellCredential().GetNetworkCredential();
  return cred;
}

Security aspects

Only the user account that encrypted the password can decrypt it, because the Windows Data Protection API (DPAPI) is used under the hood with keys tied to that user's profile. This is illustrated in the following screenshot. If you look carefully, you can check that I did not cheat on the parameter.

Conclusion

The approach described here is simple, quick and secure. You need to run a few PowerShell commands and you can store passwords securely in your configuration files. Make sure to encrypt the required password using the service account that will be performing the decryption. Note that this technique is not limited to use in FIM deployments. You can use this technique in any .Net/Windows context.

References

Scripting Guy - Decrypt PowerShell Secure String Password

IS4UFIM on Azure [Technorati links]

March 13, 2015 02:39 PM
While deploying your Forefront Identity Manager labs in your own local virtual environment is convenient, it does consume a lot of your precious disk drive space and there is no questioning the impact of hardware failure. So why not move your virtualization layer to the cloud and let Azure take care of the storage, networking and compute infrastructure for you? This post will go over the steps we took in order to successfully automate the deployment of our FIM lab environments on the Microsoft Azure platform.

Azure infrastructure fundamentals

In order to create domain joined environments in Azure, there are four components we need:

1. An affinity group
Having our resources deployed in the same region (data center) is a fair option, but there is no certainty these resources are also located in the same cluster within that data center. Using affinity groups, we can define a container in which all our virtual machines are physically placed close together. This improves latency and performance, and thereby reduces cost.
2. A cloud service
This component is responsible for hosting our virtual machines. It gets assigned a public IP address, making it possible for you to connect to your environment from any location using your own defined endpoints.
3. A virtual network
In a domain joined setup it is necessary your machines can talk to each other. Using a VPN, we make sure VM's are deployed in the same IP range. These VM's can be assigned a static internal IP address which makes it possible to define your domain controller as the DNS server for the virtual network.
4. A storage container
Each deployment gets its own container to host their virtual hard disks (VHD's) under a storage account which is linked to the subscription of the deployment's cloud service.


Setup process

The whole process is executed using the Azure library module for PowerShell. First we authenticate our Azure account and select the current subscription we wish to use for our setup. We then assign a valid storage account to this subscription.

Combining the infrastructure elements we can set up our Azure environment in which our lab will be deployed. We first check and create a valid cloud service name in an affinity group. Then we retrieve the XML network configuration. In this configuration we insert our new VPN and the DNS server to use. The address space for this virtual network will be 10.0.0.0/16, with a DNS server referenced to the IP address of the domain controller. Finally we create a new storage container for this lab. And that's it, our environment is ready for deployment.
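As an illustration, here is a trimmed-down sketch of that environment setup with the Azure PowerShell (service management) cmdlets; all names and paths below are placeholders rather than our actual configuration:

# Authenticate and select the subscription to use for this lab
Add-AzureAccount
Select-AzureSubscription -SubscriptionName "MSDN Subscription"
Set-AzureSubscription -SubscriptionName "MSDN Subscription" -CurrentStorageAccountName "is4ulabstorage"

# Affinity group and cloud service that will host the lab
New-AzureAffinityGroup -Name "fimlab-ag" -Location "West Europe"
New-AzureService -ServiceName "fimlab-svc" -AffinityGroup "fimlab-ag"

# Upload the virtual network configuration (VPN with the domain controller as DNS server)
Set-AzureVNetConfig -ConfigurationPath "C:\FimLab\NetworkConfig.xml"

# Storage container that will hold the VHDs of this deployment
New-AzureStorageContainer -Name "fimlab-vhds"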

Next up we will provision our servers. We have preconfigured each server up to the point of domain joining and captured a generalized VHD of this state. These VHD's are stored as images on our storage account so we can use them over and over again as a base machine for each server. Each machine has its own parameterised configuration consisting of the VM name, size, location of the VHD, image to use and endpoints to assign for both RDP and remote PowerShell. We assign it the correct static IP address and subnet name defined in the VPN configuration. The next step depends on the type of server: there are 2 ways of defining the provisioning configuration.

We start with the AD server, which will not include any domain parameters. We define our provisioning parameters just like we would provision a standalone machine. When the machine is booted up, we run a remote script through the PowerShell endpoint and promote it to a domain controller. And voila, we created a domain within our VPN.
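A hedged sketch of that promotion step, assuming the WinRM endpoint of the new VM is reachable (domain name, VM names and credentials are placeholders):

# Get the WinRM endpoint of the freshly provisioned AD VM
$uri = Get-AzureWinRMUri -ServiceName "fimlab-svc" -Name "fimlab-dc"

# Promote it to a domain controller for a new forest
$options = New-PSSessionOption -SkipCACheck -SkipCNCheck
Invoke-Command -ConnectionUri $uri -Credential $adminCred -SessionOption $options -ScriptBlock {
    Install-WindowsFeature AD-Domain-Services -IncludeManagementTools
    Install-ADDSForest -DomainName "is4u.lab" `
        -SafeModeAdministratorPassword (ConvertTo-SecureString "P@ssw0rd!" -AsPlainText -Force) `
        -InstallDns -Force
}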

Next up we provision the other type (non-AD) of servers to the domain. Sadly, it is currently not supported to send parallel requests on the same cloud service using the Azure API. Instead, and because the domain provisioning configuration is the same for all the other servers, we can create an instance configuration for each server and send them all together in one bulk creation request. This is as close as it gets to parallel provisioning of multiple servers in the same cloud service.
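A sketch of how such a bulk request can look, assuming $labServers holds the name, size, image and static IP address for each server (all values are placeholders):

# Build a domain-joined VM configuration per server (SQL, FIM, ...)
$vms = foreach ($server in $labServers) {
    New-AzureVMConfig -Name $server.Name -InstanceSize $server.Size -ImageName $server.Image |
        Add-AzureProvisioningConfig -WindowsDomain -JoinDomain "is4u.lab" -Domain "is4u" `
            -DomainUserName "administrator" -DomainPassword "P@ssw0rd!" `
            -AdminUsername "localadmin" -Password "P@ssw0rd!" |
        Set-AzureSubnet -SubnetNames "Subnet-1" |
        Set-AzureStaticVNetIP -IPAddress $server.IPAddress
    # The provisioning config also creates the default RDP and WinRM endpoints;
    # their public ports can be adjusted afterwards with Set-AzureEndpoint.
}

# One bulk creation request for all servers in the same cloud service
New-AzureVM -ServiceName "fimlab-svc" -VNetName "fimlab-vnet" -VMs $vms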

After the domain provisioning step has been executed, we automate the configuration of each server using PowerShell scripts which run on the PowerShell endpoint defined for that server.

Faster deployments?

Depending on the chosen server setup, this process can take quite a while to complete (mainly the installation of software during configuration). A basic setup consists of an AD / SQL / FIM server and takes about half an hour to complete in its most basic configuration. Available optional servers include Exchange, BHOLD, and SCSM (both management server and data warehouse), which take the total server count up to 7 servers. A full setup takes a lot longer and for this reason, we implemented a complementary way of setting up our labs. We started from a fully working lab setup and captured the VM's as specialized images, as opposed to the generalized images in the previous method. This way you can use these images in a new Azure environment in a slightly different provisioning configuration and have your basic deployment set up in less than 6 minutes! The drawback of this is that the lab is not completely configurable as it uses a saved state (snapshot) of a previous domain joined setup.

The lazy way

Having a PowerShell cmdlet library at our disposal is really nice, although running these commands ourselves is not really what we were after. So we made our own WPF application to create an interface which will invoke the underlying PowerShell scripts (both the Azure module locally and configuration of servers remotely) in C# using a PowerShell class. During setup, passwords for all the (service) accounts are generated and stored in an XML file which can be imported into KeePass. The RDP configuration is automatically generated in an RDG file which can be opened with RDCManager (http://www.microsoft.com/en-us/download/details.aspx?id=44989).

Let’s round up with some visual material!

Account setup:




Environment setup:





Redeploy a default lab with a limited set of parameters:




Fully customize your deployment parameters for each server:




Start a full installation or manually proceed with each step:

IS4UFIM 2010: Event driven scheduling [Technorati links]

March 13, 2015 07:59 AM

Intro

In a previous post I described how I implemented a windows service for scheduling Forefront Identity Manager.

Since then, my colleagues and I have used it in every FIM project. For one project I was asked if it was possible to trigger the synchronization "on demand". A specific trigger for a synchronization cycle, for example, was the creation of a user in the FIM portal. After some brainstorming and Googling, we came up with a solution.

Trigger

We asked ourselves the following question: "Is it possible to send a signal to our existing Windows service to start a synchronization cycle?". All the functionality for scheduling was already there, so it seemed reasonable to investigate and explore this option. As it turns out, it is possible to send a signal to a Windows service, and the implementation turned out to be very simple (and simple is good, right?).

In addition to the scheduling on predefined moments defined in the job configuration file, which is implemented through the Quartz framework, we started an extra thread:

while (true)
{
 if (scheduler.GetCurrentlyExecutingJobs().Count == 0 
  && !paused)
 {
  scheduler.PauseAll();
  if (DateTime.Compare(StartSignal, LastEndTime) > 0)
  {
   running = true;
   StartSignal = DateTime.Now;
   LastEndTime = StartSignal;
   SchedulerConfig schedulerConfig = 
      new SchedulerConfig(runConfigurationFile);
   if (schedulerConfig != null)
   {
     schedulerConfig.RunOnDemand();
   }
   else
   {
    logger.Error("Scheduler configuration not found.");
    throw new JobExecutionException
        ("Scheduler configuration not found.");
   }
   running = false;
  }
  scheduler.ResumeAll();
 }
 // 5 second delay
 Thread.Sleep(5000);
}

StartSignal

The first thing it does is check that none of the time-triggered schedules are running and that the service is not paused. Then it checks to see if an on-demand trigger was received by checking the StartSignal timestamp. So as you can see, the StartSignal timestamp is the one controlling the action. If the service receives a signal to start a synchronization schedule, it simply sets the StartSignal parameter:

protected override void OnCustomCommand(int command)
{
 if (command == ONDEMAND)
 {
  StartSignal = DateTime.Now;
 }
}

If a signal was received, the first thing it does next is pause the time-triggered mechanism. When the synchronization cycle finishes, the time-triggered scheduling is resumed. The beautiful thing about this way of working is that the two separate mechanisms work alongside each other. The time-triggered schedule is not fired if an on-demand schedule is running and vice versa. If a signal was sent during a period of time the service was paused, the on-demand schedule will fire as soon as the service is resumed. The StartSignal timestamp will take care of that.

StartSync

So, how do you send a signal to this service, you ask? This is also fairly straightforward. I implemented the FIM portal scenario I described above by implementing a custom C# workflow with a single code activity:

using System.ServiceProcess;
 
private const int OnDemand = 234;
 
private void startSync(){
 ServiceController is4uScheduler = 
  new ServiceController("IS4UFIMScheduler");
 is4uScheduler.ExecuteCommand(OnDemand);
}

If you want to know more about developing custom activities, this article is a good starting point.
The integer value is arbitrary. You only need to make sure you send the same value as is defined in the service source code. The ServiceController takes the system name of the Windows service.

Powershell

The same is possible in Powershell:

[System.Reflection.Assembly]::Load("System.ServiceProcess, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a")
$is4uScheduler = New-Object System.ServiceProcess.ServiceController
$is4uScheduler.Name = "IS4UFIMScheduler"
$is4uScheduler.ExecuteCommand(234)

Another extension I implemented (inspired by Dave Nesbitt's question on my previous post) was the delay step. This kind of step allows you to insert a window of time between two management agent runs. This is in addition to the default delay, which is inserted between every step. So now there are four kinds of steps possible in the run configuration file: LinearSequence, ParallelSequence, ManagementAgent and Delay. I saw the same idea being implemented in PowerShell here.

A very useful function I didn't mention in my previous post, but which was already there, is the cleanup of the run history (which can become very big in a fast-synchronizing FIM deployment). This function can be enabled by setting the option "ClearRunHistory" to true and setting the number of days in the "KeepHistory" option. If you enable this option, you need to make sure the service account running the service is a member of the FIM Sync Admins security group. If you do not use this option, membership of the FIM Sync Operators group is sufficient.
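For reference, the manual equivalent of this cleanup is a single call on the FIM Synchronization Service WMI provider; the scheduler's option presumably does something along these lines (the 7-day window is just an example):

# Clear synchronization run history older than 7 days via the MIIS WMI provider
$server = Get-WmiObject -Namespace root\MicrosoftIdentityIntegrationServer -Class MIIS_Server
$cutOff = (Get-Date).AddDays(-7).ToUniversalTime().ToString("yyyy-MM-dd HH:mm:ss.fff")
$server.ClearRuns($cutOff) | Out-Null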

To end I would like to give you pointers to some other existing schedulers for FIM:
FIM 2010: How to Automate Sync Engine Run Profile Execution

IS4UFIM 2010: Eliminating equal precedence [Technorati links]

March 13, 2015 07:50 AM

Intro

Precedence can be tricky in certain scenarios. Imagine you want to make FIM master for a given attribute, but you need an initial flow from another data source. A good example is the LDAP distinguished name. If you have a rule that builds the DN automatically based on a base DN and one or more attribute values, the object is provisioned with the correct DN on export. But when you want to visualize this DN in the FIM portal, you need to be able to flow it back. If FIM is master over the distinguished name attribute, this flow will be skipped "Not precedent".

So you have to consider the option of using equal precedence, since manual precedence is not possible in combination with the FIM MA. But equal precedence is dependent on the synchronization cycle order: "the last one to write the attribute wins". Therefore it is not an option if FIM needs to be the absolute master of the DN attribute and you want to make sure that it always has the value you expect it to have.

Single valued attribute

The solution I came up with to work around this issue involves using two separate metaverse attributes. The flow is illustrated by the following table.

Datasource    Metaverse    FIM Portal
ds_attr       mv_attr1     fim_attr
              mv_attr2

By using two metaverse attributes, both requirements are satisfied:

  1. The value from the data source flows through mv_attr1 into the FIM portal, so the initial value (for example the generated DN) becomes visible in FIM.
  2. FIM remains the absolute master of the attribute, because only mv_attr2, for which the FIM MA is precedent, is exported back to the data source.

Multi valued attribute

We tried to apply this solution for multi valued attributes as well. A well known attribute that fits this use case is proxyAddresses. Initially, some exchange attributes are set in AD, such as mailNickName and homeMdb. Exchange generates some proxyAddresses based on defined rules. These aliases need to be available in FIM if FIM is used to manage this information.

To our surprise, the solution did not work in this case. After some investigation, the explanation was simple. The two metaverse attributes were not equal, which resulted in unexpected values after two or more synchronization cycles.

  1. Delta import delta sync FIM MA: fim_attr flows to mv_attr2
  2. Export AD MA: mv_attr2 flows to ds_attr
  3. Delta import delta sync AD MA: ds_attr flows to mv_attr1

The third step is expected, but does not update the entire value of mv_attr1 (key here is "entire"). The delta import delta sync step checks only changed attributes (and for multi valued attributes only changed entries). The value of ds_attr was just exported, so FIM compares its value with the originating metaverse attribute, which is mv_attr2. Since the values of mv_attr2 and ds_attr match, the export is successfully confirmed. But the value of mv_attr1 remains unchanged and is different from mv_attr2. In the next synchronization cycle, the value of mv_attr1 will be synchronized to the value of fim_attr, which results in an unwanted value.

If full synchronizations are used, everything works as expected because all entries in the multi valued attribute are taken into consideration. On a delta sync, only the changed fields are evaluated. We applied an advanced import flow to allow the flow of addresses generated by Exchange for newly created mailboxes.

if (csentry["ProxyAddresses"].IsPresent 
  && mventry["ProxyAddresses"].Values.Count == 0)
{
  mventry["ProxyAddresses"].Values = 
    csentry["ProxyAddresses"].Values;
}

Summary

The proposed configuration allows two-way updates while enforcing precedence for one data source. However, it does not work for multi valued attributes using delta synchronizations.

IS4UFIM 2010: SSPR with one-way trust [Technorati links]

March 13, 2015 07:50 AM

Intro

This article describes and documents an SSPR setup between two AD forests with a one-way trust. FIM is deployed in the internal domain is4u.be. Users from the domain dmz.be are being imported and managed by FIM. There is a one-way incoming trust on the dmz.be domain. All prerequisites from the password reset deployment guide are already satisfied.

DMZ connector configuration

SSPR requires that the DMZ connector service account has local logon rights on the FIM synchronization server. If the service account is from the DMZ domain, a two-way trust is required to allow this setting. Since this is not a valid option in this scenario, a service account from the IS4U domain needs to be delegated the proper rights on the DMZ domain. This includes at least the following:
  1. Replicating directory access
  2. Reset password

WMI verification

The configuration as is was tested and worked, but after a week, following the same scenario resulted in the following error:
An error has occurred. Please try again, and if the problem persists, contact your help desk or system administrator. (Error 3000)
Following up on the error, the event viewer gave following info:
Password Reset Activity could not find Mv record for user
This is a very clear error message indicating a problem with the WMI permissions. Checking up on this resulted in the conclusion that the permissions were set correctly. Lookups for accounts in the IS4U domain worked, but lookups for accounts in the DMZ domain failed.

Finding the PDC

Going back to the event viewer, we were given another clue:
DsGetDCName failed with 1355
A bit of research taught us that the SetPassword call of SSPR always calls DsGetDCName, because SSPR needs to find and target the PDC (the domain controller with the PDC emulator role). This call seems to fail. We tried getting more info by running this specific call via nltest (nltest /dsgetdc:dmz /netbios), but it failed with the following message:
Getting DC name failed: Status = 1355 0x54b ERROR_NO_SUCH_DOMAIN
However, resolving the FQDN using nltest /dsgetdc:dmz.be /netbios succeeded. And, even more strange, retrying to resolve using the netbios name did work! Some googling pointed to caching of certain information, which explained why the netbios lookup works after the FQDN lookup and why the initial configuration worked and then broke a week later.

WINS

NetBIOS recognizes domain controllers by the [1C] service record registration, but we could not find the correct WINS configuration, maybe because of the one-way trust.

Solution

The solution involved changing the advanced IP configuration settings. By adding the is4u.be and be DNS suffixes, the DsGetDCName call is forced to always resolve the FQDN by searching for dmz.be instead of dmz.
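If you prefer to script this change, the suffix search list can also be set with the DnsClient module on Windows Server 2012 or later; a minimal sketch (adjust the list to your own environment):

# Set the DNS suffix search list so that "dmz" resolves as "dmz.be"
Set-DnsClientGlobalSetting -SuffixSearchList @("is4u.be", "be")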

References

IS4UFIM2010: Filter objects on export [Technorati links]

March 13, 2015 07:49 AM

Intro

FIM allows you to filter objects on import through filters in the connector configuration. The same functionality is not available on export. There are two methods available to provision a selected set of objects to a target system through synchronization rules. This article briefly describes these two mechanisms and also describes a third using provisioning code.

Synchronization Rules

Synchronization rules allow codeless provisioning. They also give you control over the population of objects you want to create in a certain target system.

Triplet

The first way of doing this is by defining a set of objects, a synchronization rule, a workflow that adds the synchronization rule to an object and a Management Policy Rule (MPR) that binds them together. In the set definition you can define filters. You can select a limited population of objects by configuring the correct filter on the set.

Scoping filter

The second method defines the filter directly on the synchronization rule, so you do not need a set, workflow and MPR. You simply define the conditions the target population needs to satisfy before they can be provisioned to the target system.

Coded provisioning

Coded provisioning allows for very complex provisioning and it is also the only option on projects where you use only the Synchronization Engine. What follows is only a portion of a more complex provisioning strategy:

Sample configuration file

<Configuration>
  <MaConfiguration Name="AD MA">
    <Export-Filters>
      <Filter Name="DepartmentFilter" IsActive="true">
        <Condition Attribute="Department" Operation="Equals" IsActive="true">Sales</Condition>
      </Filter>
    </Export-Filters>
  </MaConfiguration>
</Configuration>

Sample source code

The following code is not functional on its own, but it gives you an idea of what the complete implementation could look like:
private bool checkFilter(MVEntry mventry, Filter filter)
{
  foreach (FilterCondition condition in filter.Conditions)
  {
    // Return false if one of the conditions is not true.
    if (!checkCondition(mventry, condition))
    {
      return false;
    }
  }
  return true;
}

 

private bool checkCondition(MVEntry mventry, FilterCondition condition)
{
  string attributeValue = condition.Attribute;
  if (mventry[attributeValue].IsPresent)
  {
    if (mventry[attributeValue].IsMultivalued)
    {
      foreach (Value value in mventry[attributeValue].Values)
      {
        bool? result = 
          condition.Operation.Evaluate(value.ToString());
        if (result.HasValue)
        {
          return result.Value;
        }
      }
      return condition.Operation.DefaultValue;
    }
    else
    {
      bool? result = condition.Operation.Evaluate(mventry[attributeValue].Value.ToString());
      if (result.HasValue)
      {
        return result.Value;
      }
      return condition.Operation.DefaultValue;
    }
  }
  return condition.Operation.DefaultValue;
}

IS4UFIM2010: GUI for configuring your scheduler [Technorati links]

March 13, 2015 07:49 AM

Intro

I described in previous posts how I developed a windows service to schedule FIM. The configuration of this scheduler consists of XML files. Because it is not straightforward to ensure you have a consistent configuration that satisfies your needs, I developed an interface to help with the configuration. The tool itself is built using the WPF framework (.NET 4.5) and has the following requirements:
Note that it is possible to use the tool on any server or workstation. After saving your changes you can transfer the configuration files to your FIM server.

Configure triggers

The first tab, job configuration, allows you to add, delete, rename and configure triggers. Each trigger specifies a run configuration and on which schedule this run configuration will be fired. A typical delta schedule for FIM is "every 10 minutes during working hours". This can be translated to the cron expression "0 0/10 8-18 * * ?". The drop down list with run configurations is automatically populated based on the existing run configurations. Save config performs a validation of the cron expression using the job_scheduling_data_2_0.xsd schema file. If valid, JobConfiguration.xml is saved. A backup of the previous configuration is saved as well. Reset config reloads the interface using the configuration in the file on disk.

Configure global parameters

The second tab, run configuration, offers three tabs to configure the RunConfiguration.xml file. The first of these tabs, global config, allows you to configure some global parameters:
Save config performs a validation using the RunSchedulingData.xsd schema file. If valid, RunConfiguration.xml is saved. This includes the settings of all three tabs under run configuration. A backup of the previous configuration is saved as well. Reset config reloads the interface using the configuration in the file on disk.

Configure run configurations

The run configurations tab allows you to add, delete, rename and edit run configurations. Editing a run configuration comes down to two things: setting the default run profile and defining its steps.

Default run profile

The default run profile is the top-most action. Steps that do not have an action defined take the action defined by their parent. This mechanism allows you to reuse sequences in combination with different profiles. You could have a run configuration with default run profile "Full import full sync" and another with "Delta import delta sync"; both could then use the same sequence, resulting in different actions. This mechanism only works if you use a consistent naming convention for run profiles across all connectors in the FIM Synchronization Engine. Run profile names are case sensitive. If the scheduler tries to start a run profile that does not exist, the management agent will not be run. In the example here, the sequence Default will be run with run profile "Delta import delta sync".
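
The fallback from step to parent can be pictured roughly as follows (a hedged sketch; the scheduler's real types and names are likely different):

using System.Collections.Generic;

public static class RunProfileResolver
{
  // Resolve the run profile for a step: a step without its own action inherits
  // the default run profile of the run configuration or sequence that calls it.
  public static string Resolve(string stepAction, string defaultRunProfile,
                               ICollection<string> profilesOnManagementAgent)
  {
    string profile = string.IsNullOrEmpty(stepAction) ? defaultRunProfile : stepAction;

    // Run profile names are case sensitive; an unknown name means
    // the management agent is simply not run for this step.
    return profilesOnManagementAgent.Contains(profile) ? profile : null;
  }
}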

Steps

The add step button opens a new dialog where you can select the type of step. Because the server export info is read, a list of possible actions is available. However, as explained above, you do not need to specify an action.

Configure sequences

The sequences tab allows you to add, delete, rename and edit sequences. The functionality provided here is identical to that on the run configurations tab. Whether a sequence is executed as a linear or parallel sequence is defined by the step that calls it, so a sequence can be defined as linear in one run configuration (or other sequence) and as parallel somewhere else.

Download

You can find the new release of the IS4U FIM scheduler on GitHub: FIM-Scheduler - release 1.3. The setup that installs the scheduler on the FIM server now also includes the GUI tool to configure it.
March 12, 2015

Radovan Semančík - nLight: How Precise are the Analysts? [Technorati links]

March 12, 2015 10:48 AM

Industry analysts have been producing their studies and fancy charts for decades. There is no doubt that some of them are quite influential. But have you ever wondered how the results of these studies are produced? Do the results actually reflect reality? How are the positions of individual products in the charts determined? Are the methodologies based on subjective assessments that are easy to influence? Or are there objective data behind them?

Answers to these questions are not easy. The methodologies of industry analysts seem to be something like trade secrets. They are not public. They are not open to broad review and scrutiny. Therefore there is no way to check a methodology by looking "inside" and analyzing the algorithm. So, let's have a look from the "outside". Let's compare the results of proprietary analyst studies with a similar study that is completely open.

But it is tricky to make a completely open study of commercial products. Some product licenses explicitly prohibit evaluation. Other products are almost incomprehensible. Therefore we have decided to analyze open source products instead. These are completely open and there are no obstacles to evaluating them in depth. Open source has been mainstream for many years and numerous open source products are market leaders. Therefore this can provide a reasonably good representative sample.

As our domain of expertise is Identity Management (IDM), we have conducted a study of IDM products. Here are the results of the IDM product feature comparison in a fancy chart:

We have taken great care to make a very detailed analysis of each product, and we have very high confidence in these data. The study is completely open and therefore anyone can repeat it and check the results. But these are still data based on feature assessment done by several human beings. Even though we have tried hard to be as objective as possible, this can still be slightly biased and inaccurate ...

Let's take it one level higher. Let's base the second part of the study on automated analysis of the project source code. These are open source products. All the dirty secrets of software vendors are there in the code for anyone to see. Therefore we have analyzed the structure of source code and also the development history of each product. These data are not based on glossy marketing brochures. These are hard data taken from the actual code of the actual system that the customers are going to deploy. We have compiled the results into a familiar graphical form:

Now, please take the latest study of your favorite industry analyst and compare the results. What do you see? I leave the conclusion of this post to the reader. However I cannot resist the temptation to comment that the results are pretty obvious.

But what to do about this? Is our study correct? We believe that it is. And you can check that yourself. Or have we made a mistake and the truth is closer to what the analysts say? We simply do not know, because the analysts keep their methodologies secret. Therefore I have a challenge for all the analysts: open up your methodologies. Publish your algorithms, your data and a detailed explanation of the assessment, exactly as we did. Be transparent. Only then can we see who is right and who is wrong.

(Reposted from https://www.evolveum.com/analysts/)
March 11, 2015

Nat Sakimura: [Personal Information Protection Act Amendment] On anonymously processed information and records of third-party provision (revised 3/13 8:50) [Technorati links]

March 11, 2015 11:29 PM

The bill to amend the Personal Information Protection Act was approved by the Cabinet on Tuesday and has been published [1].

Several problems have been pointed out regarding it. Today I will take up two of them:

1. The scope of regulation for anonymously processed information is not set appropriately
2. The scope of the record-keeping obligation for third-party provision is not set appropriately

1. The scope of regulation for anonymously processed information is not set appropriately

Professor Takagi has pointed out the problems with anonymously processed information in his personal diary, "The half-baked drafting of the anonymously processed information provisions has created a bad situation (The future of personal data protection law, part 15)" [2]. In a nutshell:

(1) "Anonymous processing" is defined as an extremely broad concept that includes partial deletion of information and pseudonymization (Article 2, Paragraph 9).

(2) Anyone who compiles such information into a database and uses it in business is an "anonymously processed information handling business operator" (Article 2, Paragraph 10). Note that this ends up covering almost everyone.

(3) When creating an anonymously processed information database:

(a) The personal information must be processed in accordance with the standards set by the rules of the Personal Information Protection Commission (Article 36, Paragraph 1).

(b) When anonymously processed information has been created, the categories of information about individuals contained in it must be publicly disclosed, as prescribed by the rules of the Personal Information Protection Commission (Article 36, Paragraph 3).

As a result, even very broad and ordinary uses, such as a company analyzing personal information internally, would have to be carried out "in the manner prescribed by the rules of the Personal Information Protection Commission" [3]. This appears to be a simple bug, so I assume it will of course be fixed. Right?! Also, the process that produces bugs like this is fairly obvious, and I would like that to be fixed properly as well.

2. The scope of the record-keeping obligation for third-party provision is not set appropriately…

[1] http://www.cas.go.jp/jp/houan/189.html

[2] http://takagi-hiromitsu.jp/diary/20150310.html#p01

[3] I only noticed this after Professor Takagi pointed it out to me somewhere. Since I apparently lack the eye for this, I would clearly be unqualified to work at the Personal Information Protection Commission or the like.

March 10, 2015

Ian Glazer: FAQ for Building a Presentation [Technorati links]

March 10, 2015 10:19 PM

I’ve been collecting questions I get about my thoughts on how to build a presentation.  Here are, in no particular order, some of the top ones and my answers.

Does this work for every kind of presentation?

Hell no! It works well, for me, for keynotes. It works well for building talks that are presentations as performances.

It will not work well for lectures and workshops. It will not work well if what you actually need is documentation. See Tufte on that one.

How long does this take?

Start to finish it takes me between 40 and 80 hours to build a complete 20-minute keynote. I can’t tell if that is too much or too little time.

But in the end, it doesn’t matter. Think of building a presentation like building an animated movie. It takes hours upon hours to build just one frame.

Can I do this?

Hell yes! If you have clarity of what you want to communicate and if you have empathy for your audience, you can do this. Do not let anyone tell you otherwise.

I tried this and ended up with 150 slides for a 15-minute talk. Did I do something wrong?

Nope. You nailed it.

Whoever tells you that you need X many slides for a Y-minute presentation either has never delivered a good talk or is just a formulaic drone who hasn’t had an original thought since Clinton’s first term.

Let me be clear – if you are delivering a keynote, do not let anyone dictate a number of slides. You wrote the speech. You know the words. You know how many slides you need. They know f@#! all.

My company has a slide template. Should I use it?

That depends.

I always adhere to corporate color palettes. Why? Because someone more trained than I selected those colors, and I respect their expertise.

I tend to adhere to corporate slide layouts… except when I don’t. Again, my approach works for keynotes. If you are delivering a product roadmap talk, this will not work. If you are giving a technical workshop involving hands-on work, do not use this method.

What if I get scared?

This is really two questions in one.

What if you don’t get scared?

It means you don’t give a shit. It means that you don’t care about the delivery. It means you don’t care about the audience.

You should be scared or, more accurately, nervous. But do not let that paralyze you. Wrap yourself in the knowledge that what you have to say must be said and that you are the best person to say it. Know that not everyone will love the talk – that’s not the goal. Get through to just 1 person in the audience and you have triumphed.

What if I freeze up on stage?

You won’t if you:

You won’t get lost. You won’t bog down. Just don’t try and read your speaker notes – that will sink you for sure.

What if it’s a big room?

The bigger the better. The bigger the room the less of it you can see. With the lights you’ll only really see the first two rows or so. Even if you think the people you can see don’t like you or your message, remember that they want you to succeed. No one wants to see the person on stage fail, freak out, or freeze, because they project themselves on to the speaker, on to you. Just keep having a conversation with those first two rows of people.

What if it’s a small room?

No worries. Remember, the speech is the core of what you want to communicate. You are having a chat over coffee with your audience.

But remember, never talk at the audience. You are talking with the audience: you speak; they send body language messages back to you.

How do I get better at presenting?

Through better preparation and through presenting more.

Who should I watch to learn from?

Don’t start with other speakers. Don’t start with TED. Find a small venue in your hometown and go watch live music or stand-up comedians. Watch people who are comfortable in their own skin, standing under a light, vulnerable, and there to entertain you.

My friend Bob wrote about another place you can learn how to present – watching street (or in this case MARTA) preachers.

Okay fine, but what presentations should I watch?

Start by watching other people in your industry and in your interest groups. Don’t watch “professional” speakers; they are far too polished. Don’t watch politicians; they are far too optimized for speaking in sound bites. Watch the genuine beginners; the ones who just gush honest, deep love for what they are speaking about.

I was told no more than 3 bullets per slide with no more than 6 words per slide. Why aren’t there more guidelines like that?

I think a lot of those so-called presentation rules were made by people who printed actual transparencies (foils) and delivered them on an overhead projector. The spirit of this decree is to avoid too much text on a slide. But don’t worry, you already did that by limiting yourself to one thought per slide. Remember, you are not building a document. You are building a presentation. You are delivering a performance. Shakespeare did okay without bullets.

What’s the most important thing I can do to improve my delivery?

Rehearse. Out loud. Not to anyone else. Only you get to hear your own voice, and there is nothing more difficult than that. Rehearse, but not to memorize each word.

The aim is not a robotic recitation. The aim is to completely internalize the spirit of the speech, to the point that seeing any slide in the deck at random triggers you to spout out the spirit of what that slide is meant to say. This is equivalent to musicians learning scales so that they can solo.

Any books to read to get more info?

Mike Jones - Microsoft: HTTP-Based OpenID Connect Logout Spec [Technorati links]

March 10, 2015 10:17 PM

A new HTTP-Based OpenID Connect Logout spec has been published at http://openid.net/specs/openid-connect-logout-1_0.html. This can coexist with or be used instead of the current HTML postMessage-based Session Management Spec.

The abstract for the new spec states:

This specification defines an HTTP-based logout mechanism that does not need an OpenID Provider iframe on Relying Party pages. Other protocols have used HTTP GETs to RP URLs that clear cookies and then return a hidden image or iframe content to achieve this. This specification does the same thing. It also reuses the RP-initiated logout functionality specified in Section 5 of OpenID Connect Session Management 1.0 (RP-Initiated Logout).
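
As a loose illustration of the pattern the abstract describes (the OP issues an HTTP GET to an RP URL, which clears the RP's session cookie and returns trivial content suitable for a hidden image or iframe), a minimal RP-side handler in ASP.NET MVC might look like the sketch below. The route and cookie name are made up for the example and are not defined by the spec.

using System;
using System.Web;
using System.Web.Mvc;

public class LogoutController : Controller
{
  // Hypothetical front-channel logout endpoint that the OP loads in a hidden iframe or image.
  [HttpGet]
  public ActionResult FrontChannel()
  {
    // Expire the RP's own session cookie ("rp_session" is a made-up name).
    HttpCookie expired = new HttpCookie("rp_session")
    {
      Expires = DateTime.UtcNow.AddYears(-1),
      HttpOnly = true
    };
    Response.Cookies.Add(expired);

    // Return empty content; the response body is irrelevant to the logout itself.
    return Content(string.Empty, "text/html");
  }
}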

Special thanks to Brian Campbell, Torsten Lodderstedt, and John Bradley for their insights that led to some of the decisions in the spec.

Gluu: Inbound SAML and inter-domain LDAP Sync [Technorati links]

March 10, 2015 07:39 PM


A couple questions recently came up from an organization that is considering using the Gluu Server as their central access management infrastructure. Since many organizations face similar identity and access management challenges, we decided to make the questions and our responses public.

Question 1: Can I use Gluu CAS server to serve internal apps?

Yes. The Gluu Server CE installation includes an option to deploy a CAS server. We have a simple design: both Shibboleth and CAS call oxAuth for authentication. That way, a custom authentication script can be leveraged for both CAS and SAML applications (and OpenID Connect applications).

Question 2: Can the Gluu Server support a CAS-protected application, accessible via a specific URL, that should delegate authentication to a SAML IdP hosted by an external partner?

Yes…

You’d have to use the SAML custom authentication script in the Gluu Server:
https://github.com/GluuFederation/oxAuth/tree/master/Server/integrations/saml

You may want to consider deploying the Asimba SAML proxy as in this diagram. This is an especially good idea if you have multiple partners who want to use their own SAML server for authentication. Then you can map each URL to a backend IDP in one XML file.

See also our press release about Asimba:

http://www.gluu.co/asimba-pr

Question 3: Can the Gluu server synchronize user information from the LDAP directories of my partners or customers via cache refresh?

Yes, but this would never happen in practice. LDAP is almost never Internet facing.

The Gluu Server acts as a meta-directory and consolidates identity information from numerous (internal) authoritative backends. You don’t want to dynamically generate attributes, because it kills the performance of the IDP and complicates the IAM deployment. Consolidating multiple internal LDAP servers (doing a big JOIN…) is supported in the Gluu Server. You can map attribute names, transform the values, or even create new attributes. If there is an RDBMS, Gluu recommends using the Radiant Logic VDS to create an LDAP view of this data. Whether you choose to also copy passwords to the Gluu Server depends on whether they are in an encryption format supported by OpenDJ. If you can, it is recommended, because OpenDJ password validation is super-fast.

Julian Bond: Louis Gray on the end of Friendfeed [Technorati links]

March 10, 2015 07:56 AM
Louis Gray on the end of Friendfeed
http://blog.louisgray.com/2015/03/friendfeeds-closure-another-painful.html

One of the amazing things about FF was that you could stand in the full stream and watch snippets of the global conversation float past every 5 seconds or so.

It's also reminded me about a long standing FAQ about G+. The Profile.About.Links YASN-Roll really ought to grab the content and feeds from those links and present them somehow. Not by auto-generating posts but perhaps filling another tab on the profile. It's still a missing piece in the Apps puzzle to aggregate all the posts and all the comments and conversations across all the platforms. The Buzz problem (and FF problem) was the potential for spammy abuse. So the answer is to make it private to the profile pages so you only get to see it if you specifically go and look.

The other challenge is that when FF was built everybody, but everybody, produced RSS/Atom feeds, so there was generally something to pick up and aggregate. But then location apps like Foursquare stopped doing it. And shortly after, the major platforms stopped as well: they required authentication, went to custom formats and generally stopped playing ball. In some ways, FF was the peak of the Web 2.0 dream of open common formats before the tide rolled back.
 FriendFeed's Closure Another Painful Loss from a Vibrant Era of Social Media »
Amidst all the Apple watch hoopla today, FriendFeed's blog announced the long-ignored social networking pioneer was finally going to be taken out back behind Facebook's brilliant new campus and be put down for good. With the ...

[from: Google+ Posts]

Drummond Reed - Cordance: Selma [Technorati links]

March 10, 2015 02:49 AM

It is very hard, being a white man who was only seven years old at the time, to even think I can appreciate what it was like to cross the Edmund Pettus bridge on Bloody Sunday, March 7, 1965.

But Selma takes you there. Puts you in the shoes and eyes and ears and mostly the voice of Martin Luther King Jr. David Oyelowo is practically a medium channeling that voice; in fact it was stunning to learn that these were not the actual words of Dr. King, due to restrictions on the rights (so even greater plaudits to director Ava DuVernay for making them ring so true).

Though the singing of Glory by John Legend and Common at this year’s Academy Awards was the most moving and significant Best Song in memory, it still did not offset the travesty that David Oyelowo was not nominated. What, pray God, was the Academy thinking?


Kaliya Hamlin - Identity Woman: IIW is early!! We are 20!! We have T-Shirts [Technorati links]

March 10, 2015 02:03 AM

Internet Identity Workshop is only a month away. April 7-9, 2015

Regular tickets are only on sale until March 20th. Then prices go up again to the late registration rate.

I’m hoping that we can have a few before we get to IIW #20!!

Yes it’s almost 10 years since we first met (10 years will be in the fall).

I’m working on a presentation about the history of the Identity Gang, Identity Commons and the whole community around IIW over the last 10 years.

Where have we come from? …which leads to the question… where are we going? We plan to host at least one session about these questions during IIW.

It goes along with the potential anthology that I have outlined (but I have a lot more work to do to get it completed).

March 09, 2015

Mike Jones - Microsoft: OAuth Proof-of-Possession draft -02 closing open issues [Technorati links]

March 09, 2015 11:32 PM

An updated OAuth Proof-of-Possession draft has been posted that addresses the open issues identified in the previous draft. The changes were:

Thanks to Hannes Tschofenig for writing text to address the open issues.

This specification is available at:

An HTML formatted version is also available at:

Julian Bond: Well that's a bit sad. [Technorati links]

March 09, 2015 08:00 PM
Well that's a bit sad.
http://thenextweb.com/insider/2015/03/09/facebook-is-killing-off-friendfeed-on-april-9/
 Facebook is killing off FriendFeed on April 9 »
Remember when Facebook bought FriendFeed back in 2009? Amidst the mania of Apple Watch news, the platform today announced it's shutting down.

The reason, as you might expect, is ...

[from: Google+ Posts]