September 29, 2016

Matthew Gertner - AllPeers: Apps and Their Influence on the World of Entertainment [Technorati links]

September 29, 2016 08:42 PM
Apps and Their Influence on the World of Entertainment: how much has the game changed? (Photo by unsplash.com)

One thing the world has always loved is entertainment. But the world of entertainment has changed dramatically, particularly with the advent of the internet and smartphones. Today, professional entertainment industry programs are incorporating app development into their curricula, and for good reason. Entertainment is a business, businesses have to make money, and to make money, you have to market your products. So how does all this work?

Musicians and Artists

More and more musicians and artists are able to break through by using smartphone apps. Few budding musicians have the money to record their own album, and even fewer have a chance to get signed by a label. With smartphone apps, however, they can make and share recordings of themselves with ease. Apps also let them get up close and personal with their listeners, keeping fans up to date with upcoming gigs, for instance.

Interactive TV

The TV industry is a huge element of the entertainment business, and it too has strongly embraced smartphone app technology. For instance, shows such as The X Factor now allow people to vote for their favorite acts through apps. This matters because it makes viewers feel more connected to the artists they support. Furthermore, once the season is over and a winner has been announced, people are more likely to stay up to date with that artist, because they feel invested. As such, the artists are more likely to become successful as well.

The Future of Mobile Apps and the Entertainment Industry

The reason mobile apps are becoming so useful in the entertainment industry, and why music degrees are placing a focus on them, is that they are linked to so many different platforms. It is now easier than ever to create an app that allows fans to appreciate their favorite acts more while at the same time making those acts more famous. But can these apps really help people reach the top?

What we are seeing is that the entertainment industry is becoming very clever at using mobile development to its advantage. Independent artists can build huge fan bases, managers can generate more ticket sales, and more. Whether someone is a complete novice artist or is running the world's most popular television show, mobile apps are now part of the picture. And, as with many other things, consumers now expect this. They have grown accustomed to being able to interact directly with companies, service developers, individuals, and more.

So what is the future of mobile apps and the entertainment industry? Nobody knows exactly what the future will bring, but there are hints. For instance, live streaming of concerts and sporting events is likely to take off very soon, and some services have already started. There are crowdfunding apps, streaming apps, downloading apps, purchasing apps, and more. The influence of apps on the world of entertainment has grown to monumental proportions in recent years. We shouldn't be terribly surprised, though; as Dylan sang, the "times they are a-changin'" – then as now, the axiom still holds true.

The post Apps and Their Influence on the World of Entertainment appeared first on All Peers.

Kantara Initiative: Identity, Smart Contracts and Blockchains – oh my! [Technorati links]

September 29, 2016 03:55 AM

Author: James Hazard, CommonAccord

The Kantara Blockchain and Smart Contracts Discussion Group (DG-BSC) launched in July 2016 to work on the connection between identity management and the new movement of blockchains and smart contracts.

These three areas – identity, smart contracts and blockchains – are converging into an interoperable, peer-based platform. Smart contracts provide a platform for widely codifying legal documents. We invite everyone with an interest in any of these fields – law, identity, smart contracts or blockchains – to join us in the DG-BSC.

Blockchains have been widely heralded for their ability to connect parties without reliance on an "intermediary," an owner of the ecosystem. Smart contracts are a way to combine automation with legal meaning. Ideally, smart contracts could clarify and automate all of a person's relationships and history. While the phrase and the idea of "smart contracts" predate blockchains, the two are now commonly run together. That has impeded the use of smart contracts, notably in connection with identity.

Prying the ideas apart, we can see smart contracts as records that link to code and to legal prose.

The word "link" is critical. To avoid repetition, increase reuse and promote standardization, the record should rely on common components. This is Unix-style architecture. Linking also promotes portability, an idea that is often lost in blockchain discussions. In those discussions, the assumption is often that the canonical log of records for everyone will be held on a single blockchain – either one log for everything (among the maximalists) or one per domain. That is, of course, inefficient, insecure and, to the extent it includes personal information, often illegal.

Identity provides a better perspective. The goal must be to have each person be master of their own information, and to have that information be copied, distributed and retained elsewhere as little as possible.

Smart contracts therefore must work in wallets independent of the platform. They must be capable of independently calculating aggregate tax liabilities or overdrafts and warning of delivery and expiration dates. To the extent that blockchains are used for transactions, blockchain records must be copied to and compatible with the wallet, giving the person a full record of their transactions.
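To make that concrete, here is a toy sketch of the kind of platform-independent checks a wallet could run over its own copy of the records. The record shape and field names (`tax_due`, `expires_at`) are invented for illustration and are not part of any real wallet format:

```php
<?php
// Toy sketch: wallet-side checks over a person's own transaction records,
// independent of any particular blockchain. The record fields below
// (tax_due, expires_at) are illustrative assumptions only.

function aggregateTaxLiability(array $records)
{
    // Sum the tax owed across every record the wallet holds.
    $total = 0.0;
    foreach ($records as $record) {
        $total += $record['tax_due'];
    }
    return $total;
}

function expiringSoon(array $records, $now, $windowSeconds)
{
    // Return the records whose expiration falls inside the warning window.
    $soon = array();
    foreach ($records as $record) {
        if ($record['expires_at'] > $now && $record['expires_at'] <= $now + $windowSeconds) {
            $soon[] = $record;
        }
    }
    return $soon;
}
```

Because the checks operate only on the wallet's local copy of the records, they work identically whether or not the underlying transactions touched a blockchain.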

In blockchain use cases, it is common to focus on legally "simple" uses. Of course, in law, as in life, nothing is simple; everything is connected and laced with assumptions, ambiguity, discretion, variation and chance. So "simple" means highly stereotyped: transactions where there is an assumed common legal framework and the "legal" variations are limited – more complex than merely price and quantity, but still tightly bounded, for instance voting protocols or specific kinds of information exchanges. These assumptions can involve heroic reductionism regarding legal context. The most famous example is "The DAO" disaster, where a complex voting protocol wrapped itself in a declaration that its "law" was whatever the code did. Amplifying Lawrence Lessig's famous "code is law," it essentially declared that "bugs are law." Effectively, it created a $200 million bug bounty. A fox, declared a lawful resident of the hen house, claimed $50 million.

By embracing rather than shunning legal context, smart contracts can bring the desired transparency and efficiency across a very broad range of transacting, perhaps all of it, without running new legal risks or inventing new legal methods or concepts.

Records can link to “prose” that provides full-text versions of appropriate, well-understood, conventional legal text. The most obvious is of course “contracts.” Contracts are party-based self-governance – loosely called the “law of the parties.” In the best uses, legal text defines the expected performance, the assumptions of the parties, what to do in edge cases, and how to resolve disputes if things go badly wrong. There are paradigms – models or precedents – for all things that people actually do. Participants – businesses, agencies, courts, insurers and lawyers – know roughly what to do with each of them. Law has accumulated rules and knowledge regarding them. There is a huge platform of people and institutions that already “mine” legal meaning from legal documents.

Linking within the prose – the legal precedents – can similarly reduce redundancy, promote reuse and clarity. It can enable the legal function to work with the tools and dynamics pioneered in open source software collaboration. CommonAccord is an approach to linking legal context and code.

Identity is at the center of this. Wallet holders – human and “legal” persons – can regain control of their information. They should have full freedom over how and where to host their wallets. This requires that the record and “smart contract” layer be independent of the platform, equally able to work with and without blockchains. Linking records to their context is a simple way to do this.

Matthew Gertner - AllPeers: Why Copper Is Now so Valuable [Technorati links]

September 29, 2016 02:33 AM
Wondering Why Copper Is Now so Valuable? It can be found in everything. (Photo by CC user Digon3 on Wikimedia Commons)

Copper is one of the most important minerals on our planet and it can be used in a variety of different ways. From copper nails to copper health supplements, it truly is everywhere. In fact, only iron and aluminum are used more in the world. Copper is also very versatile, being used in anything from industrial machinery to art.

Copper is found in some unusual places, including computers. Plus, the Statue of Liberty is actually made from copper. The statue, a gift from France 100 years after the Declaration of Independence, was made from copper at the suggestion of its French architect. It looks green now because copper oxidizes.

IBM used to use aluminum in its computer chips, but it switched to copper. When it did, the price of the mineral went through the roof. Today, copper is used in a range of computer components, including motherboards, circuits, and chips. Using it increases the speed of computers while at the same time lowering their price.

You will also find copper in a range of residential and commercial applications. In fact, copper wiring is found in almost every appliance because it has such excellent conductivity features. It is also commonly found in the plumbing industry.

Then there is the fact that copper was once currency. Copper pennies no longer exist, having only been minted between 1793 and 1823. In fact, the Numismatic Society has stated that, in 1857, the copper penny was made up of around 95% copper. In the early 1980s, however, the value of copper rose dramatically, which is why the U.S. Mint changed the core of the penny to zinc, with only the outer layer remaining copper.

Because copper is used in so many different places, it is becoming increasingly valuable. In a single year, it almost doubled in value; today, a pound of copper costs around $3.47. Many people have, in fact, compared copper to gold. This is one reason it is frequently targeted by thieves, who steal copper wire from railway tracks, roofs, and abandoned buildings. This has been a particular problem in New Orleans, where numerous people have been arrested for theft – once from an elementary school, another time from a church. Similar issues have been reported in Minnesota, New Jersey, and Alabama.

Copper is also in huge demand because the global economy is booming. Countries in Asia, including India, China, and Japan, are growing so quickly that their demand for raw materials outstrips supply. These are also the countries where many businesses make home appliances, which means demand is rising even more quickly. Whether the copper industry is sustainable, and if so for how long, is currently anybody's guess. Who knows – maybe someone might even try to steal the Statue of Liberty (or one of its four copies in Paris, France).

The post Why Copper Is Now so Valuable appeared first on All Peers.

Matthew Gertner - AllPeers: Will Chronic Pain Define Your Lifestyle? [Technorati links]

September 29, 2016 01:45 AM

"Backpain" by Eugenio "The Wedding Traveller" Wilman, in accordance with CC BY-SA 2.0

Anyone who has ever suffered chronic pain knows how uncomfortable it truly can be.

That said, chronic pain impacts millions of Americans on a daily basis, though some people finally stand up (at least if they are able to) and say enough is enough.

In the event you are someone dealing with chronic pain in areas such as your neck, back, legs, shoulders, or head, where will you ultimately go for help?

While some people will just rely on medications and hope to one day be rid of such discomfort, others will take action, knowing that doing so could bring relief.

So, will you let chronic pain define your lifestyle or will you stand up and fight back?

Where to Turn for Assistance?

So that your fight against chronic pain is not met with minimal or zero success, keep a couple of tips in mind:

Chronic pain has the potential to change your life in a negative way for many years to come, though you certainly can have some say in the matter.

First and foremost, be willing to face the reality that a problem does in fact exist.

From there, you have to devise a plan to fight the pain (or an injury you may have recently suffered) with all you've got. That includes not only your physical efforts, but also those that fall under the mental umbrella.

As important as the physical time and effort that go into getting better are, the mental aspect certainly can't be overlooked.

There may be (in all likelihood there will be) days when you feel you don't have the energy to get up and do the exercises needed to be as pain-free as possible. In some cases, you will have to fight through the pain in order to get better.

Whatever it takes to put pain to rest, make sure you do it.

The post Will Chronic Pain Define Your Lifestyle? appeared first on All Peers.

Matthew Gertner - AllPeers: Injecting Better Results into Your Business [Technorati links]

September 29, 2016 01:30 AM

"Apple Tree Growth" by Foam, used in accordance with CC BY-SA 2.0

Running a business in today’s world takes a combination of things.

This includes smart planning and strategies, knowing where and when to spend money, hiring the best talent out there, not to mention having a little luck along the way.

That said, your company typically can't succeed if you are not reviewing your business practices on a regular basis.

Such reviews are necessary to make sure not only that your products and/or services are up to par with consumer expectations, but also that you are marketing and advertising your brand as much as possible.

So, are you doing all you can to inject better results into your business?

Researching Ways to Improve Your Brand

In order for your brand to be at or near the top of your respective industry, remember a few tips:

  1. Products and services – Always do periodic reviews of your respective products and services, making sure they are up to the standards that consumers demand in today's world. Remember, consumers have a myriad of choices when it comes to such items and needs, so you have to stand out from the competition if you want to keep their business and respect. Whether you are a plastic injection molding company like JDL Enterprises or one of thousands of other businesses, do your best to make sure your products and/or services don't fall short of consumer expectations. For example, if you offer shutoff valves, shutoff or extension nozzles, drool eliminators and the like, make sure the products are checked and re-checked regularly. As technology continues to improve each year, many consumers will stay abreast of such changes, demanding only the best products and services from the companies they do business with;
  2. Spreading the word – Just as important as your products and services are, how you market and advertise them matters too. If you are a small business, you are oftentimes already at a disadvantage: larger companies have the manpower and finances to spread the word more easily, so as a smaller business you will often have to do double the work just to get noticed. Once again, product and/or service supervision is key. Making sure you have the right quality control in place for your brand is something you never want to overlook; it may seem like extra time and effort, but it can certainly pay dividends when all is said and done. For instance, do you market your products and/or services online? If not, you are missing out on a golden opportunity to score new business. In doing so, however, be sure that any and all products and services are ready for their primetime debuts. Never put something out there for consumers to see if it hasn't been tested and doesn't meet all necessary requirements. Whether it is a part for commercial or residential construction or countless other things, test and retest it before it goes to market and/or on display shelves and online;
  3. Customer service – Lastly, your emphasis on customer service can never be taken for granted. Almost all customers demand quality customer service, so you truly do not have a choice. While there are many different ways to put forth a fantastic customer service plan, it all starts and ends with keeping the customer happy. He or she usually has numerous choices as to where to shop, so you can't take them for granted at any step along the way. One way to keep customers happy is quality follow-up care after each purchase or inquiry they send your way. For example, if a customer made a purchase from you, follow it up with a short email or text, checking in to see if they were satisfied with the product, the service they received during checkout, and what you can do for them moving forward. For regular customers, offering specials and deals before the general public is also a good way to keep them coming back time and time again.

Injecting better results into your business isn’t rocket science, so don’t make it harder than it has to be.

The post Injecting Better Results into Your Business appeared first on All Peers.

September 28, 2016

Matthew Gertner - AllPeers: Your Guide to Creating the Best Possible ICT Suite [Technorati links]

September 28, 2016 10:42 PM

Given the central role ICT now has in the school curriculum, it’s essential that education providers have well-designed classrooms in which to teach this subject. So, if you’re preparing to create a brand new ICT suite or you’re revamping one of these learning environments, it’s important to plan the project carefully to ensure you succeed in providing a practical and inspiring space.


Get the layout spot on

Traditionally, these suites often feature rows of desks with all students facing the teacher. As well as making it hard for teachers to circulate around the room, this layout can prevent them from keeping a close eye on students' screens. Another problem with this style of learning space is that students can lack sufficient desk space, which is especially problematic during theory work.

To help your students get the most from their lessons, you might want to move away from designs like this to create a more flexible space that’s uncluttered and well suited to individual computer work, theory lessons and group sessions. For example, the saw-tooth benching offered by classroom design specialists Innova Design Solutions provides students with spacious, angled desking to maximise the amount of room they have while also making it easier for teachers to manage lessons. Inspired by the way phone booths in Berlin airport accommodate suitcases, these furnishings take full advantage of the available space and provide students with more elbow room than they would have if they sat side by side. By ensuring students’ computer monitors are all facing the same way, this style of desking also makes classes easier to manage. Teachers can lead from the front at their teacher walls with all students facing them or they can position themselves at the back of the room in order to see pupils’ screens.

For further flexibility, it’s possible to combine perimeter saw-tooth benching with central workstations. This gives added flexibility for group work and theory lessons.

Make security a priority

Because of the high value of the equipment you will include in your ICT suite, it’s essential that you take the issue of security seriously. There are a range of design solutions that can help to minimise the risk of theft and vandalism. For example, you may want to include lockable computer cupboards in these rooms. It’s also possible to opt for bolt-down screens and, if you have enough space, it might even be worth creating a separate secure storage room to house high-value items when they’re not being used.

By protecting your technology, security features like these could save you stress and expense in the long-term.

Opt for high-quality, low-maintenance fixtures and fittings

Regardless of the layout you opt for and the security solutions you incorporate into your design, it pays to make sure that the furnishings, fixtures and fittings you select are high-quality and low-maintenance. They should be made from robust materials and be easy to keep clean. For example, opting for seamless worksurfaces will help ensure you're able to keep your classroom free of dirt and debris. By paying attention to the finer points of ICT suite design, you will be able to keep your new learning environment looking fresh and appealing for as long as possible.

Getting ICT suite design spot on might seem like a challenge, but there is help available. For example, you can enlist the assistance of expert designers to guide you through the process. It’s certainly well worth making the effort to get this right. An effective classroom design can have a hugely positive impact on students’ behaviour and performance.

The post Your Guide to Creating the Best Possible ICT Suite appeared first on All Peers.

Matthew Gertner - AllPeers: Choosing Forklifts for Your Business: What you Need to Know [Technorati links]

September 28, 2016 10:32 PM

Forklift trucks can be used for a wide range of essential tasks, and having access to vehicles like this might help dramatically improve efficiency within your firm. However, if you’ve not chosen these trucks before for your business, you might struggle to know which ones to go for. To make your life a little easier, here are some of the most important factors you’ll need to consider when you’re making your selection.


Fuel

Forklifts can be powered by liquefied petroleum gas (LPG), diesel or electricity, and each of these fuels has its advantages and disadvantages. It’s important to get to grips with these before you make any decisions. Gas powered vehicles are popular among many different organisations for a number of reasons. For example, they tend to be competitively priced and they are suitable for both indoor and outdoor use. In addition, as it states on the website of LPG experts https://www.flogas.co.uk/, this fuel produces lower emissions than diesel and it is highly reliable. Other benefits include speedy refuelling, an impressive power to weight ratio and quiet operation.

Diesel trucks are also quick to refuel and they are especially good for outdoor use, but they are noisier, more polluting and unsuitable for indoor operation. In contrast, electric trucks can be ideal for certain indoor tasks, but they don’t tend to be as powerful as diesel or gas powered models and they can’t be used outside. Also, recharging these vehicles takes more time, meaning they can’t be used around the clock.

Size and features

It’s essential to think carefully about the size of trucks you require and the features you want them to have. Your vehicles will need to be able to manoeuvre comfortably in the available space and they must be able to cope with the size and weight of loads you plan to lift. To give yourself some leeway, it’s best to opt for a model that can cope with more than the heaviest load you intend to move.

Consider the specific capabilities you want your trucks to have too. The most common designs are counterbalance models, but if you need to access loads that are located very high up, you may want to opt for reach models. Meanwhile, sideloaders are effective at handling long materials like sheets and piping that may be unstable on a counterbalance vehicle.

Price tag

Of course, price is also key. Bear in mind that there are ways to save money when you’re searching for these vehicles. For example, rather than buying new, you can purchase second-hand models at reduced prices. Just make sure that if you do this, the trucks have been fully checked and serviced before you commit to buying them. Another option if you want to avoid the upfront costs is to hire your forklifts.

As long as you consider issues like these when you’re searching for trucks for your business, you should find the perfect solutions.

The post Choosing Forklifts for Your Business: What you Need to Know appeared first on All Peers.

Katasoft: Tutorial: Social Login for PHP with Stormpath & ID Site [Technorati links]

September 28, 2016 11:34 AM

Social login and registration are hot features that have become "expected" of most new applications. Building these features can be difficult and fraught with pitfalls, and ultimately something most developers aren't excited to tackle.

Here’s the good news: Social login is a core feature of the Stormpath PHP SDK! With Stormpath, you only need a few lines of code to build simple, robust social login and registration support for the four major social login providers: Facebook, Google, LinkedIn, and GitHub.

At Stormpath, we have two ways of using social providers, and in this tutorial we will cover how to use ID Site for authentication via social login in PHP. I will then convert that ID Site authorization into JWT cookie-based authentication that you can use for the remainder of the authenticated session. With Stormpath and ID Site, the process for each social provider is roughly the same. We'll use Google in this tutorial, but you can find granular setup instructions for each provider in our ID Site documentation.

Setup Your Google and Stormpath Applications

This tutorial assumes you have some basic knowledge of setting up an application and account inside of Stormpath as well as working with the Google API Manager, so I will only hit on the key points of both. You can find more information on all the providers we support over on our product guide.

Create an Application with Google

After signing up for an account at Stormpath and saving your API keys, we need to go to Google and set up a new application. You need to create credentials for an OAuth client ID, which can be done by visiting the Google Developer Console, selecting Web Application, and entering the data below.

NOTE: This tutorial uses Laravel Valet to serve local projects at the .dev TLD. If you do not have Laravel Valet, and you are on a Mac, I recommend checking it out as it makes local development a lot easier. The URL I’ll be using for this tutorial is http://stormpathsocial.dev/.

Since we plan to use ID Site, you will have to get your ID Site URL from Stormpath and place it in the Authorized JavaScript origins. Go to the ID Site Dashboard and copy the Domain section. Mine is formal-ring.id.stormpath.io, so I will fill out the field in Google as https://formal-ring.id.stormpath.io

Stormpath ID Site Domain Name

The other area you need to make sure is correct is the Authorized Redirect URIs section. This has to match exactly the URI that the authentication will be sent back to after Google handles it. For this tutorial, we will use http://stormpathsocial.dev/handleCallback.php?provider=google

Google OAuth client ID Setup

NOTE: Adding the ?provider=google is not required. I do that if I plan to use other providers, so I can have a single handler for all social providers.
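Since all providers would then share one callback URL, handleCallback.php can branch on that query parameter. Here is a minimal sketch of that idea; the helper name and provider list are my own assumptions, not part of the tutorial's files:

```php
<?php
// Sketch: resolve which social provider a callback belongs to, based on
// the optional ?provider= query parameter. Defaults to google, since that
// is the only provider set up in this tutorial.

function resolveProvider(array $query)
{
    $known = array('google', 'facebook', 'linkedin', 'github');
    $provider = isset($query['provider']) ? strtolower($query['provider']) : 'google';

    if (!in_array($provider, $known, true)) {
        throw new InvalidArgumentException('Unknown provider: ' . $provider);
    }
    return $provider;
}

// Inside handleCallback.php you might then call:
// $provider = resolveProvider($_GET);
```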

Once done, click Create and your keys will be displayed. Make sure you copy these down as we will need them next.

Google OAuth ID and Secret

Prepare your Stormpath Account

If you don’t currently have an account at Stormpath, go ahead and register now, I’ll wait here until you are back…

… Ok, now that you have an account at Stormpath, let’s log in and get some API keys to use for our project. From the dashboard, click on either Create API Keys if you are using a new account, or Manage API Keys and then Create API Keys if you had an existing account. This will download a file that we will be using later.

Set Up Api Keys

Next, let's set up our Google Directory. Go to the directories page and click Create Directory. Fill out the fields, including the ID and Secret you received from Google when creating the OAuth credentials. You then need to make sure to use the same Authorized Redirect URI that you used when creating the credentials at Google, http://stormpathsocial.dev/handleCallback.php?provider=google.
A directory that is not mapped to an application is not going to do us much good, so let's fix that. Go to your applications screen in Stormpath, click the application name you want to map the directory to, and then Account Stores. Here you can click Add Account Store and select the directory you just added.

NOTE: By default, you have two applications, My Application and Stormpath. You will want to use My Application as Stormpath is the one used for administration of your account.

To use ID Site for your social login, you will need to do a little customization of the settings in Stormpath’s ID Site Settings. There are two fields here that need to be updated. The first is the Authorized Javascript Origin URLs. For this tutorial, we are going to use http://stormpathsocial.dev. The next field is the Authorized Redirect URLs where we will put in http://stormpathsocial.dev/handleCallback.php. We are leaving off the provider query param as we will handle that differently this time.

Prep Your Code Base

Since this is a clean project, I will go through each step. I store all of my projects in ~/Code, so in that directory, let's create a directory called StormpathSocial. Since I am using Laravel Valet, as soon as that directory is created, I can go to http://stormpathsocial.dev.

We will be using a few packages, so let's set up a new composer.json file with the following:

{
    "name": "bretterer/stormpath-social",
    "description": "Social Login with Stormpath",
    "type": "Project",
    "require": {
        "stormpath/sdk": "^1.16",
        "vlucas/phpdotenv": "^2.4",
        "symfony/http-foundation": "^3.1"
    },
    "autoload": {
        "files": [
            "helpers.php"
        ]
    },
    "license": "Apache-2.0",
    "authors": [
        {
            "name": "Brian Retterer",
            "email": "brian@stormpath.com"
        }
    ]
}

Before running composer install, make sure you create an empty helpers.php file in the root of the project. This file will contain some helper functions we will use along the way. Of the other packages, the Stormpath PHP SDK handles all communication with the Stormpath API, vlucas/phpdotenv lets us set up environment variables, and symfony/http-foundation provides functionality for setting cookies and responding with redirects.
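As a placeholder, here is the kind of small convenience function that might live in helpers.php. This particular `env()` wrapper is a hypothetical example of my own, not the tutorial's actual contents:

```php
<?php
// Hypothetical helpers.php content: a tiny wrapper around getenv() that
// supports a default value, so the rest of the code can read settings
// loaded by phpdotenv without repeating the false-check everywhere.

if (!function_exists('env')) {
    function env($key, $default = null)
    {
        $value = getenv($key);
        return $value === false ? $default : $value;
    }
}
```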

Once composer install is complete, we can start writing some code. Start by creating a bootstrap.php file that we will use to set up the Stormpath client. This is also where we autoload our vendor file.

/bootstrap.php

<?php

require_once __DIR__ . '/vendor/autoload.php';

// Load our .env file
$dotenv = new Dotenv\Dotenv(__DIR__);
$dotenv->load();

Having this bootstrap file allows us to create a .env file to store all of our keys. The Stormpath key values can be found in the file that was downloaded when you created your new API keys. The application href can be found on the application page where you mapped the directory. For the Google ID and secret, you will need to reference the keys from the Google developer console.

/.env

STORMPATH_CLIENT_APIKEY_ID=1B8IKPVQ66PQEJ06G3X2ZIN0A
STORMPATH_CLIENT_APIKEY_SECRET=iSAvJozGbMVQReBKxoQSmHNYAEFzGf/QTDJCWtQ5bqo
STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/16k5PC57Imx4nWXQXi74HO

GOOGLE_APP_ID=1044707546568-tdmcis68j0g9qg0eh6qi8r99e045gb3u.apps.googleusercontent.com
GOOGLE_APP_SECRET=ap6CmtdVxQpa11YdwKMi_Bm5

Set Up Templates with Bootstrap

To give this project a little bit of style, we are going to pull in Bootstrap. There are some parts of the template that will be used across the different pages we have, so let’s create a _partials directory that will store the header, navigation, and footer.

/_partials/head.php

<!doctype html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport"
          content="width=device-width, user-scalable=no, initial-scale=1.0, maximum-scale=1.0, minimum-scale=1.0">
    <meta http-equiv="X-UA-Compatible" content="ie=edge">
    <title>Stormpath Social Example</title>

    <link href="https://maxcdn.bootstrapcdn.com/bootswatch/3.3.7/darkly/bootstrap.min.css" rel="stylesheet">

</head>
<body>

/_partials/nav.php

<nav class="navbar navbar-default">
    <div class="container-fluid">
        <div class="navbar-header">
            <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
                <span class="sr-only">Toggle navigation</span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
                <span class="icon-bar"></span>
            </button>
            <a class="navbar-brand" href="/">Stormpath Social Example (ID Site)</a>
        </div>

        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
            <?php if(null === $user) : ?>
                <ul class="nav navbar-nav navbar-right">
                    <li><a href="/login.php">Login</a></li>
                    <li><a href="/register.php">Register</a></li>
                </ul>
            <?php else: ?>
                <ul class="nav navbar-nav navbar-right">
                    <li class="dropdown">
                        <a href="#" class="dropdown-toggle" data-toggle="dropdown" role="button" aria-expanded="false"><?php print $user->givenName . ' ' . $user->surname . ' ( ' . $user->email . ' ) '; ?><span class="caret"></span></a>
                        <ul class="dropdown-menu" role="menu">
                            <li class="divider"></li>
                            <li><a href="logout.php">Logout</a></li>
                        </ul>
                    </li>
                </ul>
            <?php endif; ?>
        </div>
    </div>
</nav>

/_partials/footer.php

<script src="https://code.jquery.com/jquery-3.1.0.min.js" integrity="sha256-cCueBR6CsyA4/9szpPfrX3s49M9vUU5BgtiJj06wt/s=" crossorigin="anonymous"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.7/js/bootstrap.min.js" integrity="sha384-Tc5IQib027qvyjSMfHjOMaLkfuWVxZxUPnCJA7l2mCWNIpG9mGCD8wGNIcPD7Txa" crossorigin="anonymous"></script>
</body>
</html>

To use these in the template files, we will utilize our helpers.php file. Let's create a few functions in there:

/helpers.php

<?php

/**
 * Require the footer template
 */
function getFooter()
{
    require __DIR__ . '/_partials/footer.php';
}

/**
 * Require the head template
 */
function getHead()
{
    require __DIR__ . '/_partials/head.php';
}

/**
 * Require the navigation.
 *
 * @param $user
 */
function getNav($user = null)
{
    require __DIR__ . '/_partials/nav.php';
}

Now we can create our index.php file using the template partials.

/index.php

<?php
    require __DIR__ . '/bootstrap.php';

    getHead();
    getNav($user);
?>

<div class="container">
    <div class="well">
        <h2>Stormpath Social Login Example (ID Site)</h2>
        <p>
            This example is meant to show you the steps to building social login with the PHP SDK
        </p>
    </div>
</div>

<?php getFooter(); ?>

Build Your Bootstrap File

Now we can begin adding the social login flow. The first thing we need to do is create a Stormpath client, inside our bootstrap.php file. We are going to build the client with the ClientBuilder class and set the API keys manually from the .env variables. Open the bootstrap file and, after the Dotenv loading, add the following:

// Create a Stormpath Client
/** @var \Stormpath\ClientBuilder $clientBuilder */
$clientBuilder = new \Stormpath\ClientBuilder();
$clientBuilder->setApiKeyProperties("apiKey.id=".getenv('STORMPATH_CLIENT_APIKEY_ID')."\napiKey.secret=".getenv('STORMPATH_CLIENT_APIKEY_SECRET'));
/** @var \Stormpath\Client $client */
$client = $clientBuilder->build();

This pulls the API keys from the environment variables and creates a valid ini string to pass to the client builder. Once that is done, we build the client and assign it to the variable $client, which will be usable throughout our application.
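For reference, the ini string assembled above is equivalent to the contents of a standard apiKey.properties file; with the sample .env values it would be:

```
apiKey.id=1B8IKPVQ66PQEJ06G3X2ZIN0A
apiKey.secret=iSAvJozGbMVQReBKxoQSmHNYAEFzGf/QTDJCWtQ5bqo
```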

We now need to get the application we will be using for login. While still in the same bootstrap file, fetch the application resource by adding these lines below the previous ones:

// Get the Stormpath Application
/** @var \Stormpath\Resource\Application $application */
$application = $client->getDataStore()->getResource(getenv('STORMPATH_APPLICATION_HREF'), \Stormpath\Stormpath::APPLICATION);

The last thing to add to the bootstrap file is fetching the current user, if one is logged in. Since we will be using cookies to store the access tokens, we can look there for a valid user.

// Get the user, if found
$user = null;
if (request()->cookies->has('access_token')) {
    try {
        $decoded = \Firebase\JWT\JWT::decode(request()->cookies->get('access_token'), getenv('STORMPATH_CLIENT_APIKEY_SECRET'), ['HS256']);
        $user = $client->getDataStore()->getResource($decoded->sub, \Stormpath\Stormpath::ACCOUNT);
    } catch (\Stormpath\Resource\ResourceError $re) {
        die($re->getMessage());
    }
}
This block of code first sets a user variable to null so we have something to work with in the navigation when no one is logged in. We then check whether the request cookies contain an access_token. If a cookie with that name exists, we decode it with the JWT library, using our API key secret to validate the integrity of the token and allowing only the HS256 signing algorithm, since that is what Stormpath uses to sign the token. If the token is valid, we update the user variable with the account object from the Stormpath SDK.

If we run into any errors, we let it be known by dying with the error message. Let's clean this up a little, though: instead of die(), call error(), and create the following function inside helpers.php:

/**
 * Prints a pre formatted error message
 *
 * @param $message
 */
function error($message)
{
    print "<pre>ERROR: {$message}</pre>";
}

Log in with Google

Now that our application bootstrap is done, we can log in through the Google directory of our application. If you take a look at your _partials/nav.php file, you can see that the login link takes us to login.php, so let's create that now.

You might be thinking at this point that this is where all the Stormpath code comes into play. I can tell you (and show you) that you can create a login script with only four lines of code.

/login.php

<?php

use Symfony\Component\HttpFoundation\Response;

require_once __DIR__ . '/bootstrap.php';

$url = $application->createIdSiteUrl(['callbackUri' => 'http://stormpathsocial.dev/handleCallback.php']);

$response = Response::create('', Response::HTTP_FOUND, ['Location' => $url])->send();

That’s it.

Line by line, this script says: I want to use the Symfony Response class, which we will use at the end to redirect the user. Then I require my bootstrap.php file to gain access to the variables we set there.

(This next part is where the magic happens, watch closely!)

Looking at the next line of the script, I want to create an ID Site URL from the Stormpath application object that I can redirect the user to. For our purposes, I need to set a callbackUri, the URI the user will be redirected back to after login is complete.

Finally, I create a response with a Location header and a response code of HTTP_FOUND (302) and send it. The Location header, combined with the 302 response code, triggers the browser to redirect to the URL we specify. This sends the user to ID Site, where they will see a login screen with a Google button.

ID Site Login Window

And there you have it. Pretty cool, right?

How to Handle the Google Sign-In Callback

Once the user logs in, they will be redirected back to the callbackUri, which we need to create now. If you did not want to use access tokens for authentication, this could be done in a single line of code, but what's the fun in that? Let's make our site a little more secure and use cookie-based authentication.

We will need to create a handleCallback.php file in the root of our project where we will do the conversion of the ID Site token to access tokens.

<?php

require_once __DIR__ . '/bootstrap.php';

use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Response;

$exchangeIdSiteTokenRequest = new \Stormpath\Oauth\ExchangeIdSiteTokenRequest(request()->get('jwtResponse'));
$auth = new \Stormpath\Oauth\ExchangeIdSiteTokenAuthenticator($application);
$result = $auth->authenticate($exchangeIdSiteTokenRequest);

$accessToken = new Cookie("access_token", $result->getAccessTokenString(), time()+3600, '/', 'stormpathsocial.dev');
$refreshToken = new Cookie("refresh_token", $result->getRefreshTokenString(), time()+3600, '/', 'stormpathsocial.dev');
$response = Response::create('', Response::HTTP_FOUND, ['Location' => '/']);
$response->headers->setCookie($accessToken);
$response->headers->setCookie($refreshToken);

$response->send();

Aside from the HTML templates, this will be the longest file you have to deal with for full authentication in your web application. Most of it is actually just setting the cookies before redirecting the user back to the home page.

The first block of code after the typical require and use statements exchanges the ID Site token for the access tokens. This is where the core of the work is done. You will also see a new function here, request(), which we need to add to our helpers.php file. It is a convenient way of saying "get the current Request object."

/**
 * Get an instance of the Request object
 * 
 * @return \Symfony\Component\HttpFoundation\Request
 */
function request()
{
    return \Symfony\Component\HttpFoundation\Request::createFromGlobals();
}

Once we have this function, we can access the query parameters without reaching for the PHP superglobals $_GET or $_REQUEST directly.

PROTIP: Any time you are tempted to use a superglobal like this, reach for a package that is designed around superglobals but does not use them directly. Symfony has the best ones, in my opinion, and has become the industry standard.

Going back to the code sample, we grab the jwtResponse property, a JSON Web Token that describes the user who just logged in. This is why I said you could stop here: you already have all the information on the user that you need. We, however, are going to keep going and pass that token to the \Stormpath\Oauth\ExchangeIdSiteTokenRequest class. We then authenticate against the application using the \Stormpath\Oauth\ExchangeIdSiteTokenAuthenticator, passing in the application and finally the token request.

On a successful response, we receive an access token and a refresh token. Taking the string form of both of these JSON Web Tokens, we store them in cookies. For this demo we are only using the access_token, so I have set both to expire in 3600 seconds; however, you could read the token settings from the directory to see the defined expiration times and set them accordingly. See the token management section of our documentation for more information.

Once the cookie objects are created, create a response object to redirect the user back home and set the cookies on the response. After sending the response to the browser, the user is directed back home and they will be logged into the site.

Logged Into Application

Social Authentication — Logging Out

Logging out while using ID Site can be done by simply clearing the cookies you set, but let's take it one step further and revoke the tokens in Stormpath as well, to prevent anyone from logging in with the same cookie. Create a logout.php file where we will clear the cookies and send delete requests to Stormpath for both the access_token and the refresh_token.

/logout.php

<?php

use Firebase\JWT\JWT;
use Symfony\Component\HttpFoundation\Cookie;
use Symfony\Component\HttpFoundation\Response;

require_once __DIR__ . '/bootstrap.php';

if (request()->cookies->has('access_token')) {
    $decoded = JWT::decode(request()->cookies->get('access_token'), getenv('STORMPATH_CLIENT_APIKEY_SECRET'), ['HS256']);
    $client->getDataStore()->getResource('/accessTokens/' . $decoded->jti, \Stormpath\Stormpath::ACCESS_TOKEN)->delete();
}

if (request()->cookies->has('refresh_token')) {
    $decoded = JWT::decode(request()->cookies->get('refresh_token'), getenv('STORMPATH_CLIENT_APIKEY_SECRET'), ['HS256']);
    $client->getDataStore()->getResource('/refreshTokens/' . $decoded->jti, \Stormpath\Stormpath::REFRESH_TOKEN)->delete();
}

$accessToken = new Cookie("access_token", 'expired', time()-4200, '/', 'stormpathsocial.dev');
$refreshToken = new Cookie("refresh_token", 'expired', time()-4200, '/', 'stormpathsocial.dev');
$response = Response::create('', Response::HTTP_FOUND, ['Location' => '/']);
$response->headers->setCookie($accessToken);
$response->headers->setCookie($refreshToken);

$response->send();

One thing you may notice here is the way I expire the cookies. I am setting up new cookies with the same names, but doing two things differently. First, I set the value to "expired". This makes sure the JWT is cleared out of the cookie, and it signals to developers that this cookie should be treated as expired and not used, even if setting the time does not work. For the time, I take the current time and subtract 4200 seconds, which is far enough in the past to tell the browser to get rid of the cookie. 4200 is arbitrarily selected; feel free to choose your own favorite number of seconds.

NOTE: There is a way to create a logout URI through ID Site that you may want to use instead of, or alongside, the method above. The method above will not log you out of ID Site itself, so there is a chance that when the user goes to log in again, they will not see the ID Site login screen. The ID Site logout flow redirects back to the same handleCallback page, which means you will have to switch on the type of token based on the JWT status. For more information, see Using ID Site in the docs.

Register a User with Google Social

Registration is very similar to logging in. If you think about it, you are just logging into your application with a Google account, and Stormpath handles creating the account if it does not already exist in your application. I provided a registration link just to show how it is done. On the register.php page, you will add the same code as login.php, with one minor addition in the createIdSiteUrl call.

/register.php

<?php

use Symfony\Component\HttpFoundation\Response;

require_once __DIR__ . '/bootstrap.php';

$url = $application->createIdSiteUrl(['callbackUri' => 'http://stormpathsocial.dev/handleCallback.php', 'path' => '/#register']);

$response = Response::create('', Response::HTTP_FOUND, ['Location' => $url])->send();

The path entry in the options array passed to createIdSiteUrl tells ID Site that you want to show the registration page instead of the login page. The rest is exactly the same.

Learn More!

If you have made it this far, you now understand the basics of using ID Site for social authentication. The process is the same for any of our social providers you want to use. As a bonus, the flow is the same for any SAML provider that you want to add to your site. Here are some resources that you can use to learn more about what you just read.

If you have any questions or comments about this tutorial, please feel free to reach out to us at support@stormpath.com, or follow me on Twitter @bretterer.

// Brian


The post Tutorial: Social Login for PHP with Stormpath & ID Site appeared first on Stormpath User Identity API.

September 27, 2016

Mark Dixon - Oracle: Telephone Industry Transformation – Switchboard to Dial! [Technorati links]

September 27, 2016 07:28 PM

Switchboard

This morning, I spent a while watching some old videos about transformation in the telephone industry.  Way back before my time, the growing telephone network depended on thousands of young women working as telephone operators (boys didn’t work out so well).

The need for telephone operators was so great that AT&T produced a movie “Operator!” to describe the wonderful opportunity for a career as a telephone switchboard operator!

 

However, as demand for telephone service boomed, someone estimated that it would soon take all the young women in the nation working as telephone operators to keep up! The solution: self-dialed telephones. Soon, everyone who used a telephone became his or her own telephone operator!

But apparently, using a dial telephone was difficult enough that ever-so-scintillating training movies were produced …

Just think: most of today's young people don't know how to operate a dial telephone! A lost art indeed!

Katasoft: Hello, Stormpath! [Technorati links]

September 27, 2016 06:23 PM
Let our VW Adventures begin! | A Java Hipster - Matt Raible | The day I picked up my '66 Bus

Today, I’m pleased to announce that I’ve joined Stormpath as a Developer Evangelist!

About Me

I have a unique background; one you wouldn’t expect from a technologist. I grew up in the back woods of Montana. In fact, I was born in a log cabin, built by my grandparents, with no medical assistance except for my Dad and his hunting knife. It’s a good thing his knife was sharp because I came out with a blue head and my umbilical cord wrapped around my neck!

We didn’t have electricity or running water at The Cabin, but my sister and I didn’t know what we were missing until we started school. I lived this way for the first 16 years of my life.

Even without electricity, my Dad connected us to the internet using a 300 baud modem, a Commodore 64, and a small generator. I became inspired by the internet in the early 1990s and started writing websites before Netscape 1.0 was even released. I never intended to be a software developer; my degrees are in Russian, International Business, and Finance. However, I found I had a knack for it and taught myself everything I know.

I’ve had a fondness for frontend development since the 90s, but I jumped on the Java bandwagon in the late 90s. Before I knew it, I was slinging a lot of Java code at the .com startup where I worked. Then I got into open source: Struts, Spring, Hibernate and many others.

I created my own open source project, AppFuse, and started speaking at conferences about my experiences with open source software. For the last ten years, I’ve worked for many companies as a consultant, helping them adopt and use open source software. At the same time, I’ve traveled around the world to conferences, telling stories about my experiences.

Why Stormpath?

I’ve been consulting with Stormpath for the last six months, helping develop and release their Java SDK 1.0. During this time, I got to know the team quite well and really enjoyed my time working with them. I also realized that I was pretty good at the evangelism thing (coding, speaking, blogging, etc.). I decided that rather than taking time off (or working extra) to speak, blog, and work on open source, it’d be fun to get paid to do it. Stormpath has offered me that opportunity, and I can’t wait to show y’all some cool projects I’ve started working on.

My involvement in open source projects will continue, as will my blog posts on https://raibledesigns.com. In fact, I don’t expect much to change at all. There’s a good chance you’ll hear from me now more than ever. I plan to continue working with the open source frameworks I love: Spring Boot, Angular, and Bootstrap. With Angular 2 released, Spring Boot taking off like a rocket, and JHipster’s Angular 2 support right around the corner—Q4 is going to be a lot of fun!

If you’d like to hear about any of these technologies, you can see my Art of Angular 2 talk this evening at vJUG 24. I plan to attend the JHipster Hackathon in Washington, DC on October 11. On October 12, I’ll be watching Stormpath’s Micah Silverman talk about Securing Java Microservices with Java JWT at the Denver Java User Group. I hope to see you at one of these events!

The post Hello, Stormpath! appeared first on Stormpath User Identity API.

OpenID.net: The Foundation of Internet Identity [Technorati links]

September 27, 2016 05:22 PM

A very brief history of OpenID Connect

Katasoft: Identity Management in Spring Boot with Twilio and Stormpath in 15 Minutes [Technorati links]

September 27, 2016 02:48 PM

Today, in 30 seconds or so, I was able to set up a Twilio account and send myself a text message using httpie. Another few minutes' work (fewer than five) and I had a Spring Boot application doing the same.

In about the same five minutes, you can get set up with Stormpath’s Identity Management as a service platform and learn to love auth.

We are truly living in a golden age of not-on-my-computer (cloud) services.

Just about anything you could imagine doing with computers or other devices over the last 15 years, you can now sign up for and get a proof of concept going in minutes. I remember, sometime around 2007 (post-iPhone 1), having a conversation with an SMS broker. After filling out a sheaf of paperwork, waiting about four months, and being on the hook for lots of money, we were ready to write code against the SMS gateway. This was some arcane stuff way back then.

Ever try to roll your own identity management? Did you salt your hashes? Or, just admit it – you stored passwords in plaintext like everyone else back then.

In this post, we’ll put Stormpath and Twilio together. Here’s the scenario: When a user logs in from a new device, we want to send them an SMS notification to let them know. This is a common practice today to keep people informed about activity on their accounts. If I get a notification that a login from a new address has occurred and it wasn’t me, then I know that my account has been compromised.

Twilio + Stormpath

For the purposes of this demonstration, we'll treat access from a new IP address as access from a new device.

The code for this post can be found here.

Set Up Stormpath

The first step is to create a Stormpath account. You can follow our Quickstart docs here. These are the basic steps:

Set up the Stormpath Spring Boot Integration

The source code for this example can be found here.

For now, don’t worry about the Twilio stuff – it’s disabled by default. In the next section, we will integrate and enable Twilio.

The Stormpath Spring Boot integration makes it easy to trigger additional actions before and after a user logs in. It’s this mechanism that we use to send Twilio messages later on. For now, we will just make sure that the post-login handler is working.

To use the Stormpath Spring Boot integration you need only include a single dependency:

<dependency>
    <groupId>com.stormpath.spring</groupId>
    <artifactId>stormpath-thymeleaf-spring-boot-starter</artifactId>
    <version>${stormpath.version}</version>
</dependency>

In this case, we are using the Spring Boot + Web MVC + Thymeleaf integration so that we can return Thymeleaf templates.

To set up our postLoginHandler, we simply need to create a Spring Boot configuration that exposes a bean:

@Configuration
public class PostLoginHandler {

    ...

    @Bean
    @Qualifier("loginPostHandler")
    public WebHandler defaultLoginPostHandler() {
        return (HttpServletRequest request, HttpServletResponse response, Account account) -> {
            log.info("Hit default loginPostHandler with account: {}", account.getEmail());
            return true;
        };
    }
}

You can fire up the Spring Boot app like so:

mvn clean install
mvn spring-boot:run

Now, you can browse to: http://localhost:8080/register to create an account in Stormpath. You can then browse to: http://localhost:8080/login. You should see something like this in the log output:

2016-09-14 22:37:18.078  ... : Hit default loginPostHandler with account: micah@stormpath.com

Huzzah! Our post-login hook is working.

A Word on CustomData

The use case we are modeling in this example is to send a text message (SMS) to a user whenever they login from a new location. In order to do that we need the user’s phone number. We also need to store an array of locations they’ve logged in from so we can determine if they are logging in from a new location.

Enter Stormpath CustomData. We knew early on that we couldn’t capture all the use cases for user data that our customers might have. So, we attached 10MB of free-form JSON data to every first-class Stormpath object, including user accounts. That’s CustomData.

We store the information for the user like so:

{
  "loginIPs": [
    "0:0:0:0:0:0:0:1",
    "104.156.228.126",
    "104.156.228.136"
  ],
  "phoneNumber": "+15556065555"
}

Here’s what it looks like in the Stormpath Admin Console:

Twilio in the Stormpath Admin Console

We’ll get back to how this CustomData is set up once we work Twilio into the mix.

Set Up Twilio

Twilio has a QuickStart that will get you up and running very quickly.

The basic steps are these:

Make sure that you run the tests and can send messages. You can test it from the command line yourself using curl or httpie:

http -f POST \
https://api.twilio.com/2010-04-01/Accounts/<account sid>/Messages.json \
To=<recipient +1...> From=<your twilio phone # - +1...>  Body="Hello there..." \
--auth <account sid>:<auth token>

Now that you know you can use your Twilio account, adding it as a dependency to the Spring Boot application is a snap:

<dependency>
    <groupId>com.twilio.sdk</groupId>
    <artifactId>twilio-java-sdk</artifactId>
    <version>(6.0,6.9)</version>
</dependency>

Tie It All Together

Earlier, we set up the Spring Boot application to perform an action after a user has successfully logged in. That action was simply to log some information. Now, we are going to integrate the ability to send a Twilio message using this same post-login handler.

@Bean
@Qualifier("loginPostHandler")
public WebHandler twilioLoginPostHandler() {
    return (HttpServletRequest request, HttpServletResponse response, Account account) -> {
        log.info("Account Full Name: " + account.getFullName());

        CustomData customData = account.getCustomData();
        String toNumber = (String) customData.get(phoneNumberIdentifier);
        List<String> loginIPs = getLoginIPs(customData);

        String ipAddress = getIPAddress(request);

        if (loginIPs.contains(ipAddress)) {
            // they've already logged in from this location
            log.info("{} has already logged in from: {}. No message sent.", account.getEmail(), ipAddress);
        } else {
            boolean messageSent = TwilioLoginMessageBuilder
                .builder()
                .setAccountSid(twilioAccountSid)
                .setAuthToken(twilioAuthToken)
                .setFromNumber(twilioFromNumber)
                .setToNumber(toNumber)
                .send("New login for: " + account.getEmail() + ", from: " + ipAddress);

            // only save the ip address if the twilio message was successfully sent
            if (messageSent) {
                saveLoginIPs(ipAddress, loginIPs, customData);
            }
        }

        return true;
    };
}

The first two CustomData calls retrieve the user's phone number and the list of IP addresses the user has logged in from before, both pulled from the user's CustomData.

The handler then checks the current request's IP address against that list. If the user is logging in from a new location, it fires off the Twilio message and, only if the message was successfully sent, saves the new address back to CustomData.

The TwilioLoginMessageBuilder is defined in the sample and uses a fluent interface.

The send method used above first checks that Twilio is configured properly and, if so, attempts to send the message:

TwilioRestClient client = new TwilioRestClient(accountSid, authToken);

List<NameValuePair> params = new ArrayList<>();
params.add(new BasicNameValuePair("To", toNumber));
params.add(new BasicNameValuePair("From", fromNumber));
params.add(new BasicNameValuePair("Body", msg));

MessageFactory messageFactory = client.getAccount().getMessageFactory();
try {
    Message message = messageFactory.create(params);
    log.info("Message successfully sent via Twilio. Sid: {}", message.getSid());
    return true;
} catch (TwilioRestException e) {
    log.error("Error communicating with Twilio: {}", e.getErrorMessage(), e);
    return false;
}

Let’s fire up the app and see it in action!

mvn clean install

TWILIO_ACCOUNT_SID=<your twilio account sid> \
TWILIO_AUTH_TOKEN=<your twilio auth token> \
TWILIO_FROM_NUMBER=<your twilio phone number> \
TWILIO_ENABLED=true \
java -jar target/*.jar

Hitting the front door, http://localhost:8080, you have the opportunity to log in. If you look at the log, you’ll see that the first time you log in, you don’t get a message because there’s no phone number for you on file.

Twilio setup

2016-09-15 16:48:31.621  INFO: Account Full Name: micah silverman
2016-09-15 16:48:31.750  WARN: No toNumber set. Cannot proceed.

The next thing to do is to set a phone number:

Set a phone number with Twilio

Now, you can log out and log in again and you should receive the Twilio notification:

Twilio setup

2016-09-15 16:53:44.599  INFO: Account Full Name: micah silverman
2016-09-15 16:53:46.080  INFO: Message successfully sent via Twilio. Sid: SM9cd7fdfa3f8f463dbdd8f16662c13b5b

SMS from Twilio

Synergy!

In this post, we’ve taken Stormpath’s post-login handler capability and coupled it with Twilio’s SMS capability to produce functionality greater than either platform could provide on its own.

Definitely a golden age for services.

In the code repo for this post, there’s some more Spring Boot magic, including dynamically loading the defaultLoginPostHandler or twilioLoginPostHandler based on config settings. To use the Twilio handler, simply set the twilio.enabled=true property in the application.properties file.

Now, go forth and glue some services together for fun and profit!

Learn More

Interested in learning more about user authentication with Spring Boot and Stormpath? We have some other great resources you can review:

  • OAuth 2.0 Token Management with Spring Boot and Stormpath
  • Single Sign-On for Java in 20 Minutes with Spring Boot and Heroku
  • 5 Practical Tips for Building Your Spring Boot API

The post Identity Management in Spring Boot with Twilio and Stormpath in 15 Minutes appeared first on Stormpath User Identity API.

    September 26, 2016

    KatasoftHow to Gracefully Store User Files [Technorati links]

    September 26, 2016 01:17 PM

    When you build a web application, one thing you may need to think about is how you plan to store user files.

    If you’re building an application that requires users to upload or download files (images, documents, receipts, etc.) — file storage can be an important part of your application architecture.

    Deciding where you’ll store these files, how you’ll access them, and how you’ll secure them is an important part of the engineering process, and can take quite a bit of time to figure out for complex applications.

    In this guide, I’m going to walk you through the best ways to store files for your users if you’re already using Stormpath to handle your user storage, authentication, and authorization.

    If you aren’t already using Stormpath—are you crazy?! Go sign up and start using it right now! It’s totally free (unless you’re building a large project) and makes building secure web applications, API services, and mobile apps wayyy simpler.

    Where Should I Store Files?

    When building web applications, you’ve got a few choices for where to store your files. You can:

    1. Store user files in your database in a text column, or something similar
    2. Store user files directly on your web server
    3. Store user files in a file storage service like Amazon S3

    Out of the above choices, #3 is your best bet.

    Storing files directly in a database is not very performant. Databases are not optimized for storing large blobs of content. Retrieving files from and storing files in a database server are both incredibly slow and will tax all other database queries.

    Storing files locally on your web server is also not normally a good idea. A given web server only has so much disk space, which means you now have to deal with the very real possibility of running out of disk space. Furthermore, ensuring your user files are properly backed up and easily accessible at all times can be a difficult task for even experienced engineers.

    Unlike the other two options, storing files in a file storage service like S3 is a great option: it’s cheap, your files are replicated and backed up transparently, and you’re also able to quickly retrieve and store files there without taxing your web servers or database servers. It even provides fine-grained control over who can access what files, which allows you to build complex authorization rules for your files if necessary.

    For storing what can sometimes be sensitive information, a file storage service like Amazon S3 is a great way to get the best of all worlds: availability, simplicity, and security.

    To sign up for an Amazon Web Services (AWS) account, and to start using Amazon S3, you can visit their website here.

    How Do I Store Files in S3?

    Now that we’ve talked about where you should store your user files (a service like Amazon S3), let’s talk about how you actually store your files there.

    When storing files in S3, there are a few things you need to understand.

    Firstly, you need to pick the AWS region in which you want your files to live. An Amazon region is basically a datacenter in a particular part of the world.

    Like all big tech companies, Amazon maintains datacenters all over the world so they can build fast services for users in different physical locations. One of the benefits to using an Amazon service is that you can take advantage of this to help build faster web applications.

    Let’s say you’re building a website for Korean users. You probably want to put all of your web servers and content in a datacenter somewhere in Korea. This way, when your users visit your site, they only need to connect over a short physical distance to your web server, thereby decreasing latency.

    Amazon has a list of regions in which you can store files in S3 on their website here.

    The first thing you need to do is use the list above to pick the most appropriate location for storing your files. If you’re building a web application that needs to be fast from all over the world: don’t worry, just pick the AWS region closest to you — you can always use a CDN service like Amazon Cloudfront to optimize this later.

    Next, you need to create an S3 bucket. An S3 bucket is basically a directory in which all of your files will be stored. I usually give my S3 buckets the same name as my application.

    Let’s say I’m building an application called “The Greatest Test App”—I would probably name my S3 bucket: “the-greatest-test-app”.

    S3 allows you to create as many buckets as you want, but each bucket name must be globally unique. That means that if someone else has already created a bucket with the name you want to use: you won’t be able to use it.
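
    As an aside, bucket names also have to follow S3’s naming rules: 3–63 characters made of lowercase letters, digits, and hyphens, starting and ending with a letter or digit. Here’s a rough sketch of a client-side check — the isValidBucketName helper is my own, not part of any AWS SDK, and it skips the extra caveats around dots in bucket names:

```javascript
// Rough client-side check against the core S3 bucket naming rules:
// 3-63 characters of lowercase letters, digits, and hyphens,
// starting and ending with a letter or digit.
function isValidBucketName(name) {
  return /^[a-z0-9][a-z0-9-]{1,61}[a-z0-9]$/.test(name);
}
```

    With this, “the-greatest-test-app” passes, while a name like “The_Greatest_Test_App” gets rejected before you ever make an API call.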

    Finally, after you’ve picked your region and created your bucket, you can now start storing files.

    This brings us to the next question: how should you structure your S3 bucket when storing user files?

    The best way to do this is to partition your S3 bucket into user-specific sub-folders.

    Let’s say you have three users for your web application, and each one has a unique ID. You might then create three sub-folders in your main S3 bucket for each of these users — this way, when you store user files for these users, those files are stored in the appropriately named sub-folders.

    Here’s how this might look:

    bucket
    ├── userid1
    │   └── avatar.png
    ├── userid2
    │   └── avatar.png
    └── userid3
        └── avatar.png

    This is a nice structure because you can easily see the separation of files by user, which makes managing these files in a central location simple. If you have multiple processes or applications reading and writing these files, you already know which files are owned by which user.
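
    In code, that layout boils down to a single key-building convention. Here’s a minimal sketch — the userFileKey helper is hypothetical, not part of any SDK:

```javascript
// Build an S3 object key namespaced by user ID, e.g. "userid1/avatar.png".
function userFileKey(userId, fileName) {
  if (!userId || !fileName) {
    throw new Error('userId and fileName are required');
  }
  // Neutralize path separators so one user's file name can't point
  // into another user's sub-folder.
  var safeName = fileName.replace(/[\/\\]/g, '_');
  return userId + '/' + safeName;
}
```

    If every read and write goes through a helper like this, the per-user separation shown above is enforced in one place.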

    How Do I “Link” Files to My User Accounts?

    Now that you’ve seen how to store files in S3, how do you ‘link’ those files to your actual Stormpath user accounts? The answer is custom data.

    Custom Data is essentially a JSON store that Stormpath provides for every resource. This JSON store allows you to store any arbitrary JSON data you want on your user accounts. This is the perfect place to store file metadata to make searching for user files simpler.

    Let’s say you have just uploaded two files for a given user into S3, and want to store a ‘link’ to those files in your Stormpath Account for that user. To do this, you will insert the following JSON data into your Stormpath user’s CustomData resource:

    {
      "s3": {
        "some-file.txt": {
          "href": "https://s3.amazonaws.com/<bucket>/<userid>/some-file.txt",
          "lastModified": "2016-09-19T17:59:22.364Z"
        },
        "another-file.txt": {
          "href": "https://s3.amazonaws.com/<bucket>/<userid>/another-file.txt",
          "lastModified": "2016-09-19T17:59:22.364Z"
        }
      }
    }

    This is a nice structure for storing file metadata because it means that every time you have the user account object in your application code, you can easily see which files the user has stored in S3, where to retrieve them from, and when they were last modified.

    This JSON data makes it much easier to build complex web applications, as you can seamlessly find your user files either directly from S3, or from your user account. Either way: finding the files you need is no longer a problem.
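
    As an illustration, here’s a small helper that produces that structure before you save it back to the account. The buildS3Link function and its arguments are my own sketch; only the JSON shape comes from the example above:

```javascript
// Record one uploaded file's metadata on a custom data object, using the
// { s3: { "<file>": { href, lastModified } } } shape shown above.
function buildS3Link(customData, bucket, userId, fileName, lastModified) {
  // Preserve any file links that are already stored.
  var s3 = customData.s3 || {};
  s3[fileName] = {
    href: 'https://s3.amazonaws.com/' + bucket + '/' + userId + '/' + fileName,
    lastModified: lastModified
  };
  customData.s3 = s3;
  return customData;
}
```

    You would call this after each successful S3 upload, then save the modified custom data back to Stormpath.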

    How Do I Secure My Files?

    So far we’ve seen how you can store files, link them to your user accounts, and manage them.

    But now let’s talk about how you can secure your user files.

    Security is a large issue for sensitive applications. Storing medical records or personal information can be a huge risk. Ensuring you take the proper precautions when working with this type of data will save you a lot of trouble down the road.

    There are several things you need to know about securely storing files in Amazon S3.

    First: let’s talk about file encryption.

    S3 provides two different ways to encrypt your user files: server-side encryption and client-side encryption.

    If you’re building a simple web app that stores personal information of some sort, you’ll want to use client side encryption. This is the most “secure” form of file storage, as it requires you (the developer) to encrypt the files on your web server BEFORE storing them in S3. This means that no matter what happens, Amazon (as a company) can not possibly decrypt and view your stored files.

    On the other hand, if you’re building an application that doesn’t require the utmost (and usually more complicated) client side encryption functionality S3 provides, you can instead use the provided server side encryption technology. This technology allows Amazon to theoretically decrypt and read your files, but still provides a decent amount of protection against many forms of attacks.

    The next thing you need to know about are your file permissions, also known as ACLs. The full ACL documentation can be found here.

    The gist of it is, however, that when you upload files to S3, you can tell Amazon to give your files certain permissions.

    You can say things like: “this file should be private,” “this file should be publicly readable,” or “this file should only be readable by other authenticated AWS users.”

    Using Amazon ACLs, you can exercise very fine-grained control over who has access to what files, and for how long: it is an ideal system for building secure applications.

    A general rule of thumb is to only grant file permissions when absolutely necessary. Unless you’re building a public image hosting service, or storing files that are always meant to be publicly accessible (like user avatars), you’ll probably want to lock your files down to the maximum extent possible.
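
    That rule of thumb is easy to encode. In the sketch below, the aclForKey helper and the avatars/ prefix are illustrative assumptions, but 'private' and 'public-read' are real canned S3 ACLs:

```javascript
// Default every upload to the 'private' canned ACL, and only use
// 'public-read' for keys under prefixes that are explicitly public.
function aclForKey(key, publicPrefixes) {
  for (var i = 0; i < publicPrefixes.length; i++) {
    if (key.indexOf(publicPrefixes[i]) === 0) {
      return 'public-read';
    }
  }
  return 'private';
}
```

    You’d pass the returned string as the ACL when uploading each object; anything not explicitly whitelisted stays locked down.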

    Putting It All Together

    Now that we’ve covered all the main things you need to know to securely store user files for your user accounts with S3, let’s do a quick review of what we’ve learned.

    Store All User Files in a Sub-Folder of Your S3 Bucket

    When storing user files, keep them namespaced by user IDs in your S3 bucket. This way, you can easily distinguish between user files when looking at them from your storage service alone.

    Store File Metadata in Your User Account’s Custom Data Store

    Use Stormpath’s Custom Data store to store all user file metadata. This way you have a single, simple place to reference all of your file data from your user account alone.

    If you’re not using Stormpath to store your user accounts: you’ll want to build something similar.

    Encrypt Files on S3

    If you’re building a sensitive application: use client-side encryption to encrypt your files before storing them in S3. This will keep them really safe.

    If you’re not building a sensitive application, use Amazon’s server-side encryption to help alleviate various security concerns. It’s not as secure as client-side encryption, but is better than nothing.

    Set Restrictive ACLs for Your Files

    Finally, be sure to only grant the minimal necessary permissions you need for each file you store. This way, files are not left open or accessible to people who shouldn’t see them.

    And… That’s it! If you follow these rules for storing user files, you’ll do just fine.

    Got questions? Drop me a line or tweet @ me!

    The post How to Gracefully Store User Files appeared first on Stormpath User Identity API.

    September 24, 2016

    Matthew Gertner - AllPeers - Speed Off with Better Internet Service [Technorati links]

    September 24, 2016 06:18 PM
    Those seeking Better Internet Service often look for breakneck download speeds
    Photo by CC user criminalintent on Flickr

    Given the amount of time millions of Americans spend daily on the Internet, it should probably not come as a major surprise that many of them want (and demand for that matter) the best high-speed options out there.

    Whether one uses the Internet for pleasure, business, or perhaps both, being stuck with slow Internet speed is akin to watching a movie with tons of commercials. Face it; most people are not going to like that idea.

    That said, Internet users in search of high-speed Internet should shop around for the best deals out there, looking for a provider able to offer fast Internet service at a reasonable price.

    So, are you ready to speed off with better Internet service?

    Get Connected with the Best Deal Out There

    For you to be able to get top-rate Internet service at a reasonable price, here are a couple of ways to go about it:

    Is Bundling the Right Call?

    Once you have settled on an Internet service provider, make sure you do all you can to lock in as much savings as possible.

    One way to go about this is by bundling your needs.

    For example, various Internet service providers will offer bundled packages (Internet, television, phone etc.) for a set monthly rate.

    If you think that bundled packages are just some gimmick that most or all providers throw at consumers, think again.

    As an example, say you pay $180 a month for your three main modes of information and entertainment (Internet, television, phone). Internet service is $50 a month; television is $90 a month, while phone service comes in at $40.

    Now, what if you could pay a monthly fee of $140 for all three when they are bundled? Over a year’s time, you would save some $480.

    Another important aspect in determining just what your needs are is how much usage you get out of the Internet, your television, not to mention your phone.

    If you are not watching much television, you could decide to cut out that expense and switch to the Internet to view live streaming and/or watch videos when all is said and done.

    Lastly, any service provider you decide to go with must provide stellar customer service.

    Stop for a moment and think about how you could be more than a little upset if your Internet, television, even your phone service is down for a prolonged period of time. The last thing you want to have happen is calling your Internet provider (once you have a phone to do that), only to be told they will get someone out there to look at the problem in a couple of days.

    When you’re spending good money for what you believe are good services, you want to make sure you receive your money’s worth.

    That simply translates into getting customer service that is second to none.

    If you find your Internet service provider is not dialed-in to delivering such sound customer service, start to look around at some other potential options out there. Yes, it might seem like a hassle, but you should get what you pay for.

    When it comes to finding the best Internet service on the market, put some speed into your efforts.

    The post Speed Off with Better Internet Service appeared first on All Peers.

    September 23, 2016

    Matthew Gertner - AllPeers - How to Cut Office Overheads and Still Have a Prestigious Office [Technorati links]

    September 23, 2016 05:33 AM

    Ever imagined your office operating out of a plush Macquarie Place address, handling meetings like a pro and managing your team of hundred with ease and aplomb? While your business might not quite be at the stage of multi-level hires and exciting expansion it doesn’t mean that you have to give up the dream of a glorious Sydney business address. The costs of running a business are nothing to be sneezed at, and if you ask any business owner they will tell you that their two biggest costs are staffing and their office space. Staff is something that you kinda need to have – but what if I told you that you could manage your business office at a fraction of the costs of a fixed commercial lease, AND that you could have the prestigious address that you always wanted? You might tell me I was crazy – but I’m not. I know the secret that can help, and it’s simple: a virtual office. A Virtual Office in Sydney includes a CBD address – and when you consider the decreased costs of having a virtual office compared to renting a commercial property it makes plenty of sense.

    corneroffice

    I want to go over the benefits of having a virtual office, and then explore some of the other things that you can do to cut overheads with your business.

    A virtual office

    The decision to invest in a virtual office is a great boon for your business – let’s take a look why. You can get all of the benefits of a full-service office without having to commit to a lease. You can even get a month-to-month rental of a virtual office, which is perfect if you need to be in town for a particular event or engagement and need the perks of an office without the hassle of having to manage all of the overheads. Why not save the money you would otherwise spend on an office and put it towards something that will grow the value of your company instead? Make an investment into your future.

    Limit your overheads and expenses 

    Figuring out how to best manage your client relationship is important, and is something that shouldn’t be skimped on. That said, the value that can be gained from a face to face meeting isn’t necessarily the most important thing, and oftentimes you can get the same benefit from having a Skype meeting. Think about what is the most important thing for your business and act accordingly. Figure out what kind of entertainment policy works for your business and stick to it.

    Harness the art of telecommunication

    We are very lucky to live in this digital world, and it makes sense to harness its power for the benefit of your business. You don’t need admin people, you don’t need a full-time receptionist on your staff, and you can reduce the costs of office space simply by linking your team via the internet. Easy!

    Save on office expenses

    A big expense for your business is often the paper and other office supplies that you find yourself using. When you switch to a remote staff and a virtual office you will notice a severe decline in the costs of running an office. There is a huge amount of paper wasted every day in the office, and if you can figure out how to cut this cost for your business then you’ll be on the right track to success.

    Have you figured out any other ways to save on your office overheads and keep your prestigious address? If you know of any, feel free to let us know!

    The post How to Cut Office Overheads and Still Have a Prestigious Office appeared first on All Peers.

    September 22, 2016

    Matthew Gertner - AllPeers - Improve Your Lawn & Soil with Top-Dressing [Technorati links]

    September 22, 2016 06:05 PM
    Improve Your Lawn with top dressing
    Photo by CC user evolutionx on Pixabay

    Ideally you wouldn’t have to do this in the first place. Laying quality sand and soil before you lay your lawn down will hopefully help you avoid this process altogether. But if you want to improve your lawn and soil, then continue reading.

    A healthy lawn requires healthy soil, but that’s often difficult to achieve with an already established lawn in place. This is where top-dressing comes in. Top-dressing can help with improving soil texture and the general health of your lawn.

    Top-dressing is a method that gradually improves soil over time. As the top-dressing breaks down, it filters through the existing soil, improving its texture and general health.

    Ideal Time to Consider Top-Dressing

    Autumn is the ideal time to consider top-dressing your lawn. This gives your grass time to grow through three to four mowings before the peaks of summer and winter hit. Top-dressing can be done all at once or in stages; that entirely depends on you. Some like to plug away at it and get little bits of soil delivered at a time, whilst others prefer to order a big truckload and do it all at once. Either way, the choice is yours. Here you will find more on the top-dressing process.

    How Often Should You Top-Dress?

    This really depends on your lawn and the location of your home. Troublesome areas may require more attention and repeated application; however, you still don’t need to do it every year. The reason is that each time you top-dress you are adding more soil, which over time raises your grade and can affect thatch breakdown and therefore overall soil ecology. Therefore, it is essential not to go overboard. A good approach is to plan ahead. More frequent but lighter applications for troublesome yards will go a lot further than one deep application. For overall organic soil amendment, a very light application of top-dressing brushed into aeration holes can improve the soil without raising the grade.

    The post Improve Your Lawn & Soil with Top-Dressing appeared first on All Peers.

    Katasoft - Apache Shiro Stormpath Integration 0.7.1 Released [Technorati links]

    September 22, 2016 01:11 PM

    Welcome to the new Apache Shiro Stormpath integration! This new release features a servlet plugin, plus deeper support for Spring and Spring Boot. Until now, we have only had a basic Apache Shiro realm for Stormpath. While sufficient, this basic realm never granted access to the full suite of Stormpath services. Today, that changed!
    Shiro + Stormpath

    Servlet Plugin

    You can still use the Stormpath realm the same way you are using it today, but if you switch to the new servlet plugin you also get all of the great features you have come to expect from Stormpath, along with the benefit of having the Shiro realm created and configured for you automatically. Just drop in the dependency:

    <dependency>
        <groupId>com.stormpath.shiro</groupId>
        <artifactId>stormpath-shiro-servlet-plugin</artifactId>
        <version>0.7.1</version>
    </dependency>

    When migrating to the servlet plugin there are a few things to keep in mind:

  • You can remove the Shiro configuration in your web.xml
  • You have the option of making Shiro stateless
  • Logouts are now a POST request
  • I’ve taken one of the original Stormpath + Apache Shiro examples and updated it to use the stormpath-shiro-servlet-plugin as a migration guide.

    Stormpath Loves Spring

    I have created Spring Boot starters for web and non-web applications, as well as examples to help get you started.

    All you need to do is drop in the correct dependency:

    <dependency>
        <groupId>com.stormpath.shiro</groupId>
        <artifactId>stormpath-shiro-spring-boot-web-starter</artifactId>
        <version>0.7.1</version>
    </dependency>

    These work in conjunction with the existing Stormpath Spring modules; if you are already familiar with them, you will have no problem getting started.

    What Else?

    As if Servlet and Spring Boot Starters weren’t exciting enough, this Shiro release includes a TON of other new features, like:

  • Single sign on: Support for Stormpath’s SSO service ID Site can be enabled with a single property
  • Built in login page: One less thing to worry about
  • Social login: Login and registration support for popular social providers like Google, Facebook, LinkedIn, and GitHub
  • User registration and forgot password workflows: Out-of-the-box user management
  • Drop in servlet plugin: Just add the dependency, and forget about messing with your web.xml
  • Spring Boot starters: Both web and non-web applications work in conjunction with the Stormpath Spring Boot starters
  • Token authentication: Stateless and signed JWTs
  • New simple examples to help you get started integrating with your servlet based, Spring Boot, or standalone application
  • Better documentation

    Giving Back to Apache Shiro

    Stormpath is committed to improving Apache Shiro; that is a big reason why I joined Stormpath in the first place. Over the next few weeks I will be delivering on a few of our more exciting promises, including Servlet 3.x support, improved Spring and Spring Boot support, and Guice 4.x support.

    Learn More

    To learn more about Apache Shiro, subscribe to the mailing lists, or check out the documentation. Ready to give Shiro or Stormpath a try? These awesome tutorials will get you started:

  • Hazelcast Support in Apache Shiro
  • Tutorial: Apache Shiro EventBus
  • A Simple WebApp with Spring Boot, Spring Security, & Stormpath — in 15 Minutes
  • Secure Connected Microservices in Spring Boot with OAuth and JWTs
  • Secure Your Spring Boot WebApp with Apache and SSL in 20 Minutes

    The post Apache Shiro Stormpath Integration 0.7.1 Released appeared first on Stormpath User Identity API.

    Kantara Initiative - Real Consent Workshops: The Consent Tech Bubble Grows [Technorati links]

    September 22, 2016 01:02 AM

    By Mark Lizar and Colin Wallis

    It’s been humbling to see the growth, interest and awareness in consent tech over the last eight months and it is exciting to have Kantara right in the middle of it all.

    Over a year ago, the Kantara Initiative Consent & Information Sharing Work Group proposed a collaboration with the Digital Catapult Personal Data & Trust Network (http://pdtn.org) Consent Work Group and started a one-year plan to create awareness of consent tech.

    To achieve this collaboration, a series of five ‘Real Consent Workshops’ was facilitated. Curated by Kantara and Digital Catapult experts, the workshops delved into what would make consent and trust scale with people and personal data. Pretty exciting, groundbreaking stuff!

    The Real Consent Workshops looked at the gap between the consent people find meaningful and what we have online today, with experts in various fields presenting on a range of topics (for background and blog posts, see http://real-consent.org).

    Since the first event there has been an incredible surge of interest, as new consent laws in the EU were announced and rules for the transfer of personal information were created with Privacy Shield. Both of these actions have served as a catalyst for the consent tech market. As a result, we have seen the conversation about real consent evolve into a call for summer consent tech projects. This is great news for Kantara.

    Now, the Personal Data and Trust Network (Consent Work Group) is holding a PDTN-exclusive event on Sept 26th to explore and discuss all of the great new consent tech, laws, and regulations. We are going, and we hope to see you there.

    This growing awareness and activity around consent tech is particularly gratifying. Kantara has long been associated with and active in consent tech. With our series of Real Consent Workshops, we are once again taking a leading position in the industry.

    We are curious as to your thoughts and hopes around consent tech. Would you like Kantara to hold another series of Consent Tech Workshops? Please drop us a line with a comment or two. Let’s keep the dialogue going.

    Click the link below to register for the PDTN event:
    Consent Tech: Creating Sustainable Real Consent
    https://www.digitalcatapultcentre.org.uk/event/creating-sustainable-real-consent/

    Mark Lizar, Consultant and Integration Technical Producer for Smart Species, LTD London, mark@smartspecies.com

    Colin Wallis, Executive Director, Kantara Initiative,
    colin@kantarainitiative.org

    Matthew Gertner - AllPeers - Embrace Your Curves This Summer With A Plus Size Swimsuit [Technorati links]

    September 22, 2016 12:59 AM

    For the past decade, plus-size bikinis have been taking the world’s beaches by storm! Instead of hiding their beauty from the world, women of all shapes and sizes are sporting sexy fashions, ranging from the high-waisted bikini-bottom to the trendy tankini. This movement towards body positivity is only growing as top models such as Ashley Graham and Robyn Lawley have been trendsetting plus-size styles in the fashion industry for years – so much so that the market has stopped catering so exclusively to smaller body types, and has begun to imitate them! However, this year’s new craze is the most fun of all; finding hot deals on these hot bikinis, in every imaginable size and style online!

    A Plus Size Swimsuit is much more stylish these days

    Trotting from window to window at the mall can be entertaining but the plethora of undersized garments will leave the average shopper reeling in anger. Malls and boutique stores have all but ignored the burgeoning plus-size market. This idiocy isn’t all bad though, as it swiftly created a boon of resistance in the fashion industry and continues to shoot the perpetrators in the foot by carving out a niche for savvy shoppers, encouraging them to turn to internet distributors for the best bargains on the world’s widest varieties of plus size swimsuit. For example, retailers such as swimsuitsforall not only offer fashionable plus size swimwear, but swimwear that is transferable from the gym to the beach,and accessories such as cover-ups for that flirtatious piece to change into once you’re finished with the water.

    Gone are the days of searching hopelessly during the off-season for any bikini that will fit your top. Gone are the days of resigning yourself to buying the one, single, hideous color in the back of some overpriced hole-in-the-wall because it’s all you could find. With the mass of specialized bikini boutiques online, never again will you as a consumer fret over finding your perfect size. It is now as simple as entering the information into the online order form. Now you have more joyous hours to spend scouring for that perfect color and pattern combination! Because swimsuitsforall has the best plus size swimwear, and because they cater exclusively to plus size shoppers, you can find every pattern and style you’re looking for in one place.

    It is clear that what these online bikini boutiques truly provide is freedom. For years, plus-size women have needlessly been forced into searching far and wide for weeks, just to settle on a bikini that was neither comfortable nor attractive, nor what they were looking for. As if the convenience of shopping from home and enjoying unlimited options isn’t enough, the vast majority of these plus-size-specialist websites ensure their products are perfect for the individual consumer by offering the option of sending them back for a full refund. Such lofty degrees of competition have further driven up the quality of the average bikini. For decades, an inexpensive bikini of any size was guaranteed to rip or wear within a few trips to the pool. If the consumer wanted a swimsuit that would provide years of enjoyment, she was expected to shell out a couple of hundred dollars – and even then, it was often an “all sales are final” situation. Boutiques like swimsuitsforall.com combat this by introducing an Xtra Life Lycra collection for serious swimmers that is highly resistant to breaking or tearing.

    Indeed, thanks to the advent of internet-based, plus-size bikini boutiques, a meticulous shopper can guarantee herself a magnificent bargain on the perfect plus-size garment – in any style, any day of the year!

    The post Embrace Your Curves This Summer With A Plus Size Swimsuit appeared first on All Peers.

    September 21, 2016

    Katasoft - Securely Storing Files with Node, S3, and Stormpath [Technorati links]

    September 21, 2016 05:28 PM

    There are a lot of redundant problems you need to solve as a web developer. Dealing with users is a common problem: storing them, authenticating them, and properly securing their data. This particular problem is what we here at Stormpath try to solve in a reusable way so that you don’t have to.

    Another common problem web developers face is file storage. How do you securely store user files? Things like avatar images, PDF document receipts, stuff like that. When you’re building a web application, you have a lot of choices:

    1. Store user files in your database in a text column, or something similar
    2. Store user files directly on your web server
    3. Store user files in a file storage service like Amazon S3

    Out of the above choices, I always encourage people to go with #3.

    Storing files directly in a database is not very performant. Databases are not optimized for storing large blobs of content. Retrieving files from and storing files in a database server are both incredibly slow and will tax all other database queries.

    Storing files locally on your web server is also not normally a good idea. A given web server only has so much disk space, which means you now have to deal with the very real possibility of running out of disk space. Furthermore, ensuring your user files are properly backed up and easily, consistently accessible can be a difficult task, even for experienced engineers.

    Unlike the other two options, storing files in a file storage service like S3 is a great option: it’s cheap, your files are replicated and backed up transparently, and you’re able to quickly retrieve and store files without taxing your web servers or database servers. It even provides fine-grained control over who can access which files, which allows you to build complex authorization rules for your files if necessary.
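    For a sense of what working with S3 directly involves, here is a sketch of just the parameter-building step for the AWS SDK’s s3.putObject() call (the bucket and key names are examples, and the helper itself is my own, not part of any library):

```javascript
// Assemble the parameters you'd pass to the AWS SDK's s3.putObject() call.
// This is pure parameter construction; no network call is made here.
function putObjectParams(bucket, key, body, acl) {
  return {
    Bucket: bucket,
    Key: key,
    Body: body,
    ACL: acl || 'private'  // S3 canned ACL; files are private unless you say otherwise
  };
}
```

    A file-storage abstraction like the one described below handles this bookkeeping (and the per-user key layout) for you.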

    This is why I’m excited to announce a new project I’ve been working on here at Stormpath that I hope you’ll find useful: express-stormpath-s3.

    This is a new Express.js middleware library you can easily use with your existing express-stormpath web applications. It natively supports storing user files in Amazon S3, and provides several convenience methods for directly working with files in an abstract way.

    Instead of rambling on about it, let’s take a look at a simple web application:

    'use strict';
    
    const express = require('express');
    const stormpath = require('express-stormpath');
    const stormpathS3 = require('express-stormpath-s3');
    
    let app = express();
    
    // Middleware here
    app.use(stormpath.init(app, {
      client: {
        apiKey: {
          id: 'xxx',
          secret: 'xxx'
        }
      },
      application: {
        href: 'xxx'
      }
    }));
    app.use(stormpath.getUser);
    app.use(stormpathS3({
      awsAccessKeyId: 'xxx',
      awsSecretAccessKey: 'xxx',
      awsBucket: 'xxx',
    }));
    
    // Routes here
    
    app.listen(process.env.PORT || 3000);

    This is a bare-bones web application that uses Express.js, express-stormpath, and express-stormpath-s3 to provide file storage support using Amazon S3 transparently.

    This example initialization code requires you to define several values, which are all hard-coded above. This minimal application requires you to:

    1. Create a Stormpath API key pair and an application (for the client and application settings)
    2. Create an AWS access key pair and an S3 bucket (for the awsAccessKeyId, awsSecretAccessKey, and awsBucket settings)

    Assuming you’ve got both of the above things, you can immediately start using this library to do some cool stuff.
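    The credentials are hard-coded above for brevity. In a real deployment you’d likely pull them from environment variables instead. Here’s a minimal sketch of that idea (the variable names and the buildConfig helper are my own, not prescribed by the library):

```javascript
// Build the express-stormpath and express-stormpath-s3 settings objects
// from environment variables instead of hard-coding secrets.
// The variable names below are illustrative only.
function buildConfig(env) {
  const required = [
    'STORMPATH_API_KEY_ID', 'STORMPATH_API_KEY_SECRET', 'STORMPATH_APPLICATION_HREF',
    'AWS_ACCESS_KEY_ID', 'AWS_SECRET_ACCESS_KEY', 'AWS_BUCKET'
  ];

  // Fail fast at startup if anything is missing.
  const missing = required.filter(name => !env[name]);
  if (missing.length > 0) {
    throw new Error('Missing environment variables: ' + missing.join(', '));
  }

  return {
    stormpath: {
      client: {
        apiKey: { id: env.STORMPATH_API_KEY_ID, secret: env.STORMPATH_API_KEY_SECRET }
      },
      application: { href: env.STORMPATH_APPLICATION_HREF }
    },
    s3: {
      awsAccessKeyId: env.AWS_ACCESS_KEY_ID,
      awsSecretAccessKey: env.AWS_SECRET_ACCESS_KEY,
      awsBucket: env.AWS_BUCKET
    }
  };
}
```

    You’d then call buildConfig(process.env) and pass the stormpath and s3 objects to stormpath.init and stormpathS3 respectively.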

    Uploading User Files

    First, let’s take a look at how you can store files for each of your users:

    app.get('/', stormpath.loginRequired, (req, res, next) => {
      req.user.uploadFile('./some-file.txt', err => {
        if (err) return next(err);
    
        req.user.getCustomData((err, data) => {
          if (err) return next(err);
    
          res.send('file uploaded as ' + data.s3['some-file.txt'].href);
        });
      });
    });

    This library automatically adds a new method to all of your user Account objects: uploadFile. This method allows you to upload a file from disk to Amazon S3. By default, all files uploaded will be private so that they are not publicly accessible to anyone except you (the AWS account holder).

    If you’d like to make your uploaded files publicly available or set them with a different permission scope, you can easily do so by passing an optional acl parameter like so:

    app.get('/upload', stormpath.loginRequired, (req, res, next) => {
      // Note the 'public-read' ACL permission.
      req.user.uploadFile('./some-file.txt', 'public-read', err => {
        if (err) return next(err);
    
        req.user.getCustomData((err, data) => {
          if (err) return next(err);
    
          res.send('file uploaded as ' + data.s3['some-file.txt'].href);
        });
      });
    });

    The way this all works is that all user files will be stored in your specified S3 bucket, in a sub-folder based on the user’s ID.

    Let’s say you have a Stormpath user whose ID is xxx, and you then upload a file for this user called some-file.txt. Your S3 bucket would then contain a new file at /xxx/some-file.txt. All files are namespaced inside a user-specific folder to make parsing these values simple.

    Once the file has been uploaded to S3, the user’s Custom Data store is then updated to contain a JSON object that looks like this:

    {
      "s3": {
        "some-file.txt": {
          "href": "https://s3.amazonaws.com/<bucketname>/<accountid>/some-file.txt",
          "lastModified": "2016-09-19T17:59:22.364Z"
        }
      }
    }

    This way, you can easily see what files your user has uploaded within Stormpath, and link out to files when necessary.
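    Because that metadata is ordinary custom data, listing a user’s uploads is just a matter of walking the s3 object. A minimal sketch (the listFiles helper is my own, not part of the library):

```javascript
// Collect the uploaded files recorded under the "s3" key of an account's
// custom data (the object shape shown above). Illustrative helper only.
function listFiles(customData) {
  const s3Files = customData.s3 || {};
  return Object.keys(s3Files).map(name => ({
    name: name,
    href: s3Files[name].href,
    lastModified: s3Files[name].lastModified
  }));
}
```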

    The express-stormpath-s3 documentation talks more about uploading files here.

    Downloading User Files

    As you saw in the last section, uploading user files to Amazon S3 is a simple process. Likewise, downloading files from S3 to your local disk is easy. Here’s an example that downloads a previously uploaded S3 file:

    app.get('/download', stormpath.loginRequired, (req, res, next) => {
      req.user.downloadFile('some-file.txt', '/tmp/some-file.txt', err => {
        if (err) return next(err);
        res.send('file downloaded!');
      });
    });

    As you can see in the example above, you only need to specify the filename; no path information is required to download a file. This makes working with files less painful, as you don’t need to traverse directory paths.

    You can read more about downloading files in the documentation here.

    Deleting User Files

    To delete a previously uploaded user file, you can use the deleteFile method:

    app.get('/delete', stormpath.loginRequired, (req, res, next) => {
      req.user.deleteFile('some-file.txt', err => {
        if (err) return next(err);
        res.send('file deleted!');
      });
    });

    You can read more about this in the documentation here.

    Syncing Files

    Finally, this library provides a nice way to ensure your S3 bucket is kept in sync with your Stormpath Accounts.

    Let’s say you have a large web application where you have users uploading files from many different services into S3. This might result in edge cases where files that were NOT uploaded via this library are not ‘viewable’ because the file metadata has not been persisted in the Stormpath Account.

    To remedy this issue, you can call the syncFiles method before performing any mission critical tasks:

    app.get('/sync', stormpath.loginRequired, (req, res, next) => {
      req.user.syncFiles(err => {
        if (err) return next(err);
        res.send('files synced!');
      });
    });

    This makes building large-scale, service-oriented applications a lot simpler.

    You can read more about the sync file support here.

    Wrapping Up

    Right now this library is available only for Express.js developers. If you find it useful, please leave a comment below and go star it on GitHub! If we get enough usage from it, I’ll happily support it for the other Stormpath web frameworks as well.

    If you have any questions about Stormpath, Express.js, or Amazon S3, also feel free to drop me a line!

    The post Securely Storing Files with Node, S3, and Stormpath appeared first on Stormpath User Identity API.

    Julian Bond: Don't eat the seed corn. [Technorati links]

    September 21, 2016 07:07 AM
    Don't eat the seed corn.

    We're going to need all the fossil fuel that's left to create a world where we don't need it any more.

    Discuss.

    http://cassandralegacy.blogspot.co.uk/2016/09/the-sowers-way-some-comments.html
     The Sower's Way: some comments »
    Image: sower by Vincent Van Gogh The publication of the paper "The Sower's way: Quantifying the Narrowing Net-Energy Pathways to a Global Energy Transition" by Sgouridis, Csala, and Bardi, has generated some debate on the "Ca...

    [from: Google+ Posts]
    September 20, 2016

    ForgeRock: What’s Preventing Retailers from Implementing Omnichannel? [Technorati links]

    September 20, 2016 08:38 PM

    Antiquated Identity Infrastructure, Lack of Visibility Across Channels Keeping Retailers from Creating Frictionless Omnichannel Experiences for Shoppers

     

    These are challenging times for retailers. With so many shoppers preferring to make purchases online now, retailers with significant brick-and-mortar holdings struggle to understand their customers and tailor individual experiences across channels. We hear from a lot of retail organisations that are realising their fragmented, legacy identity and access management systems are a real barrier to omnichannel success, because they can’t support digital customer demands and business requirements. At the same time, there is growing awareness among retailers that they need to update their technologies to maintain customer loyalty and sustain growth.

    Antiquated identity and access management infrastructure is a real barrier to retailers working to implement frictionless omnichannel customer experiences.

    Analyst research backs up these concerns: An Aberdeen report found that companies with omnichannel customer engagement strategies retain on average 89% of their customers, compared to 33% for companies with weak omnichannel customer engagement. Meanwhile, an Accenture report found that 94% of retailers surveyed noted significant barriers to omnichannel integration. The retailers we work with here at ForgeRock all are seeking to provide customers with more engaging, more convenient customer experiences. “Frictionless” is a word we hear often. But we’re also hearing that these visions for transforming digital experiences are falling short in reality due to numerous challenges.

    For one, the identity and access management technologies retailers have long relied upon to secure transactions are also known to create silos of customer data. With increasing customer privacy concerns, and with regulations on data protection and sharing emerging in Europe and the U.S., it’s not surprising that there’s a lot of uncertainty. There’s also a growing consensus that the lack of continuous, intelligent security throughout the shopping journey is leading to greater risk of identity fraud and malicious attacks. Many retailers report that the inability to seamlessly connect users, devices, and things makes it difficult to onboard new customers or to enable returning customers to quickly access services or merchandise.

    Even advanced retailers with loyalty programs and fully built-out online operations struggle to create a complete view of the customer and their relationship with the brand as they move from in-store to online interactions. The Retail Gazette, the UK’s daily retail news publication, reports that while over 90% of retailers now sell online in the UK, nearly two thirds claim a lack of visibility across channels is the biggest problem they face.

    A lack of visibility across channels is what we’re hearing from our retail customers as well. Many admit that their loyalty programs are great for capturing basic customer data, but that acting on that data to engage individual customers isn’t yet possible. Retailers see patterns but can’t make connections: there’s no way to tailor offers or suggest new products that might be of interest. Because retailers lack the ability to proactively engage and personalise, the online experience is static. These problems are often rooted in the fact that many retailers have redundant identity systems and often don’t recognise that the same customer is buying from multiple brands or has multiple roles (for instance, a teacher shopping for classroom supplies one day could be a mom shopping for household cleaners the next). Antiquated identity infrastructure can also present roadblocks on the journey from prospect to active customer. It’s far easier to get new users to sign up, subscribe, or purchase when customer identity and access management processes are swift, agile, and friction-free.

    When you consider these challenges in the context of the fast-growing Internet of Things, you get a sense of just how daunting this all is for today’s retailers. One of the key concepts of the Connected Home is that connecting appliances, lighting, heating and cooling, and so on will enable homeowners to interact with retailers or service providers to, for instance, automatically have milk and groceries delivered when the fridge is getting empty, or new lightbulbs sent when the old ones blow out. These kinds of scenarios are still in their early days (Amazon Dash buttons are a good example), and their success will depend very much on retailers solving their more immediate challenges: specifically, overcoming fragmented identity and access management infrastructure. In our next post, we’ll explore some of these solutions, and how quickly connecting new digital ecosystems can position you to maximise your revenue opportunities. If you can deliver a customer experience that is seamless, personalised, and secure, you’ll be better equipped to grow a digital retail business and build lasting relationships with your users.

    Stuart Hodkinson is Regional Vice President, UK & Ireland at ForgeRock.

    The post What’s Preventing Retailers from Implementing Omnichannel? appeared first on ForgeRock.com.

    Neil Wilson - UnboundID: UnboundID LDAP SDK for Java 3.2.0 [Technorati links]

    September 20, 2016 07:37 PM

    We have just released the 3.2.0 version of the UnboundID LDAP SDK for Java. It is available for download via the LDAP.com website or from GitHub, as well as the Maven Central Repository.

    You can get a full list of changes included in this release from the release notes (or the Commercial Edition release notes for changes specific to the Commercial Edition). Some of the most significant changes include:

    Mythics: Thank You - 2016 Oracle Linux & Virtualization Partner of the Year - Oracle OpenWorld16 #OOW16 [Technorati links]

    September 20, 2016 06:04 PM

    Mythics was proud to accept the 2016 Oracle Linux and Virtualization North America Partner of the Year…

    Mike Jones - Microsoft: Using Referred Token Binding ID for Token Binding of Access Tokens [Technorati links]

    September 20, 2016 12:14 PM

    The OAuth Token Binding specification has been revised to use the Referred Token Binding ID when performing token binding of access tokens. This was enabled by the Implementation Considerations in the Token Binding HTTPS specification being added to make it clear that Token Binding implementations will enable using the Referred Token Binding ID in this manner. Protected Resource Metadata was also defined.

    Thanks to Brian Campbell for clarifications on the differences between token binding of access tokens issued from the authorization endpoint versus those issued from the token endpoint.

    The specification is available at:

    An HTML-formatted version is also available at:

    Matthew Gertner - AllPeers: Top 4 Advantages of Having a Business Website [Technorati links]

    September 20, 2016 01:50 AM
    There are many huge advantages of having a business website, so hire a web developer today. Photo by thebluediamondgallery.com and nyphotographic.com

    When you start your own business, make sure you also set up a website. This is essential for every business that wants to target a wider market. A website also helps you reach people traditional marketing usually can’t, and it generates brand awareness. You need to take advantage of the online world to keep your business growing.

    As the owner of any kind of enterprise, it is crucial that you build your own online presence. If you want your business to thrive, you must create a website for your company. A website is a powerful marketing tool that is hugely beneficial to your business. You can hire a professional web designer to build a website for you, or simply build your own using free, basic web design tools.

    A website does not require you to borrow money from lending companies like Kikka (https://www.kikka.com.au/) or traditional banks. It is actually cheap to set one up. You do have to pay annual fees, but those are also low compared to other marketing platforms.

    Still not convinced that you need a website? Here are the advantages of having one for your own enterprise:

    Having a business website is convenient.

    Customers want convenience all the time. If you have your own website, it is easier for them to shop for your products or use your services from the comfort of their homes. Potential customers can simply browse what you offer online and select what they want to purchase. It is a smart move to create a website and advertise your products and services online.

    Having a business website is cost-effective.

    Everyone knows that advertising over the internet is low-cost. Building a business website won’t hurt your pocket, so it is worth taking advantage of. With a strategically developed website, you can reap the benefits later on. Although it takes time to gain traffic to your website, it is still worth a try. Your online presence will matter in the long run, enabling you to advertise your company around the web.

    Having a business website is very accessible.

    Any website or social media account you have for your business is accessible to people across the globe, around the clock. Potential customers no longer need to visit a physical store to buy something; they can access your website from anywhere, at any time of day.

    Having a business website helps boost your sales.

    A website makes you visible worldwide and helps you gain more customers through your online presence. There is therefore a greater possibility of generating more sales, and that could mean success for your business.

    It is truly crucial to create your own business website nowadays. To get your venture off the ground, you need an effective marketing tool that will make a significant difference. As you have read, the benefits mentioned above show how important a website’s role is to your entire business.

    The post Top 4 Advantages of Having a Business Website appeared first on All Peers.

    September 19, 2016

    Katasoft: Tutorial: Launch Your ASP.NET Core WebApp on Azure with TLS & Authentication [Technorati links]

    September 19, 2016 07:05 PM

    The use of TLS (HTTPS) to encrypt communication between the browser and the server has become an accepted best practice in the software industry. In the past, it was difficult and expensive to maintain the certificates necessary to enable HTTPS on your web application. No longer! Let’s Encrypt issues free certificates for any website through an automated mechanism.

    In this tutorial, we’ll look at how to use Let’s Encrypt to provide transport-layer security for a web application built with ASP.NET Core and running on Azure App Service. Once the transport layer is taken care of, we’ll add the Stormpath ASP.NET Core integration for secure user storage and authentication.

    To follow this tutorial, you’ll need:

  • Visual Studio 2015 Update 3 or later
  • A Stormpath account (you can register here)
  • An active Azure account
  • A custom domain name
    It’s worth noting that the free service tier on Azure doesn’t allow for custom domain SSL (which is what Let’s Encrypt provides), so this solution isn’t completely free. You can sign up for an Azure free trial and get $200 of credit, which covers everything you’ll need to do in this tutorial.

    To make it easy to use Let’s Encrypt with Azure, we’ll use the Let’s Encrypt Azure site extension, which has a detailed install guide. I’ll reference this guide later when we set up the extension.

    Let’s get started!

    Create a new ASP.NET Core application

    In Visual Studio, create a new project from the ASP.NET Core Web Application (.NET Core) template.
    visual-studio-create-project

    Next, choose the Web Application template. Make sure that Authentication is set to No Authentication — you’ll add it later with Stormpath.

    Although we are going to host the application on Azure, you don’t need to check the box to host the application in the cloud. We’ll set up the deployment to Azure when we’re ready to publish the application.
    Create your asp.net core webapp

    Once you click OK, Visual Studio will create the project for you. If you run the application right now, it will look like this:
    Visual Studio workaround

    Add Stormpath for auth

    With Stormpath you get a secure authentication service built right into your application, without the development overhead, security risks, and maintenance costs that come with building it yourself. To install the Stormpath ASP.NET Core plugin, get the Stormpath.AspNetCore package using NuGet.

    Then, update your Startup class to add the Stormpath middleware:

    // Add this import at the top of the file
    using Stormpath.AspNetCore;
    using Stormpath.Configuration.Abstractions;
    
    public void ConfigureServices(IServiceCollection services)
    {
        services.AddStormpath(new StormpathConfiguration()
        {
            Client = new ClientConfiguration()
            {
                ApiKey = new ClientApiKeyConfiguration()
                {
                    Id = "YOUR_API_KEY_ID",
                    Secret = "YOUR_API_KEY_SECRET"
                }
            }
        });
    
        // Add other services
    }
    
    public void Configure(IApplicationBuilder app, IHostingEnvironment env)
    {
        // Logging and static file middleware (if applicable)
    
        app.UseStormpath();
    
        // MVC or other framework middleware here
    }

    The API key ID and secret strings can be generated by logging into the Stormpath console and clicking Create API Key. The API credentials will be downloaded as a file you can open with a text editor. Copy and paste the ID and secret into your Startup class.

    Note: For production applications, we recommend using environment variables instead of hardcoding the API credentials into your application. See the documentation for how to accomplish this.

    Adding Stormpath to your ASP.NET Core project automatically adds self-service login and registration functionality (at /login and /register). You can use the [Authorize] attribute to require an authenticated user for a particular route or action method. Your user identities are automatically stored in Stormpath, no database setup or configuration required!

    To learn what else you can do with Stormpath in your ASP.NET Core project, see the quickstart in the Stormpath ASP.NET Core documentation.

    Deploy to Azure

    Now that we have a basic application with user security, let’s deploy to Azure App Service. App Service is a managed hosting service that makes it easy to deploy applications without having to set up and maintain virtual machines.

    Navigate to Build > Publish and select Microsoft Azure App Service as your publishing target.

    Deploy to Azure

    If you’ve never published to Azure, you’ll be prompted to log in with your Azure credentials. After you authenticate, you’ll see a list of your current Azure App Service resources (if you have any).

    Since this is a new project, you’ll need to set up a new Resource Group and App Service instance to host it. Click on the New button to create the required resources.
    Create an app service in Azure

    In the first field, type a name for your application. The name you pick will be the temporary Azure URL of your application (in the form of .azurewebsites.net). Enter a name for the Resource Group, and click New to create a new App Service Plan (the defaults are fine).

    Once you have populated all the fields on the dialog, click Create to provision the resources in Azure. When the process is complete, the deployment credentials will be populated for you on the next step of the Publish wizard. Click the Validate Connection button to make sure everything is working.
    Create an ASP.NET WebApp

    Clicking Publish will cause Visual Studio to build your project. If there aren’t any compilation errors, your project files will be pushed up to Azure. Go ahead and try it!

    You can verify that your application is running by visiting http://yourprojectname.azurewebsites.net in a browser. So far, so good! Now we’ll use Let’s Encrypt to enable secure HTTPS connections to your application.

    Set up Let’s Encrypt for TLS

    There are a few steps to getting Let’s Encrypt set up with your Azure App Service application:

  • Upgrade your App Service plan to one that supports Server Name Indication (SNI)
  • Map a custom domain name to your application
  • Set up the prerequisites for the Let’s Encrypt extension
  • Install and configure the Let’s Encrypt extension
    We’ll take a look at each step in detail.

    Upgrade your App Service plan

    Unfortunately, the Free tier doesn’t have support for custom certificates. You’ll need to use the Azure portal to upgrade the App Service plan to Basic (B1) or higher. You can upgrade from App Services > (your application) > Scale up (App Service plan):

    Upgrade Azure

    Pick a tier and click on Select to upgrade your plan. If you’re using the free Azure trial, the tier cost will come out of the trial credits (so you won’t be charged anything right away).

    Map a custom domain to your application

    Let’s Encrypt issues a TLS certificate for a specific domain, so you’ll need to have a domain ready. You can buy one through the Azure portal, or at a registrar like Namecheap for $10 or less.

    You’ll need to find the IP address of your App Service application, which you can find in the Azure portal at App Services > (your application) > Custom domains:

    Set up an external IP

    Using this IP address (and the assigned hostname), create these A and TXT records in the DNS record management tool of your domain registrar:

    A    *    <ip address>
    A    @    <ip address>
    A    www    <ip address>
    TXT    *    <hostname>
    TXT    @    <hostname>
    TXT    www    <hostname>

    This looks a little different in each registrar. In the Namecheap portal, it looks like this:

    Namecheap DNS records

    Once you’ve added these DNS records, you can add the hostname in the Azure portal by clicking Add hostname:

    Add a hostname

    Pick the A Record type and wait for the validation steps to occur. If the validation isn’t successful, you’ll be prompted to fix the problem. When all the checkmarks are green, click Add hostname to save the custom domain.

    It can take some time for the DNS caches across the internet to update (up to 48 hours in some cases). You can check the status of your DNS records using the dig tool, or on the web at digwebinterface.

    Set up the prerequisites for Let’s Encrypt

    The community-built Let’s Encrypt extension for Azure has a few prerequisites that must be set up. I won’t repeat these steps here because the official wiki covers them well! Jump over to How to install: Create a service principal and follow the instructions.

    One thing that tripped me up was in the Grant permissions step: the new service principal account needs to be added to both the App Service instance and the Resource Group it resides in. In both cases, select the resource (App Service or Resource Group) and open the Access control (IAM) subpanel. Click Add and follow the steps to add the service principal account as a Contributor role.

    One final note: it can take some time for the service principal permissions to populate. I had to wait almost an hour before I could continue. If you get strange errors later on when you’re configuring the extension, you may need to give it a bit more time.

    Install and configure the Azure Let’s Encrypt extension

    Now it’s time to install and configure the Azure Let’s Encrypt site extension. Go to your site’s SCM page (https://.scm.azurewebsites.net), then to Site extensions. On the gallery tab, search for “Let’s Encrypt”. Install the 32-bit version and click the Restart Site button.

    After your site restarts, press the “Play” button on the extension. If you get a “No route registered for ‘/letsencrypt/’” error, try restarting the site one more time.

    The Azure Let’s Encrypt extension page should look like this:

    Azure + Let's Encrypt

    Fill out these fields:

  • Tenant – found on the More services > Azure Active Directory > Domain names screen (in the form of .onmicrosoft.com)
  • SubscriptionId – found on the App Service > Overview screen
  • ClientId – created in the previous step
  • ClientSecret – created in the previous step
  • ResourceGroupName – found on the App Service > Overview screen
    Check the box to update the application settings, then click Next and give the extension some time to work.

    When I first tried to save the settings, I got an error (“’authority’ Uri should have at least one segment in the path…”). If you get this error, the extension wasn’t able to automatically create the required application settings keys. You’ll need to manually create these keys (with the values from the fields above):

  • letsencrypt:Tenant
  • letsencrypt:SubscriptionId
  • letsencrypt:ClientId
  • letsencrypt:ClientSecret
  • letsencrypt:ResourceGroupName
    You can create these keys in App Service > (your application) > Application settings > App settings.

    When the keys are set up correctly, you’ll see a new screen after clicking Next. Pick the custom domain you want to use, enter your email address, and click Next.

    When I first did this, I got some errors about permissions. It turns out I didn’t have the service principal account added to the Resource Group as a Contributor (see the previous section). Once I did, and gave the permissions time to propagate, the extension worked fine.

    That’s it! When you browse to https://yourcustomdomain.com, you’ll see the certificate from Let’s Encrypt in the address bar:

    Let's Encrypt

    Notice the expiration date on the certificate? Let’s Encrypt certificates are only good for 90 days before they must be renewed. Fortunately, the Let’s Encrypt extension can take care of the renewal automatically.

    Automatic certificate renewal

    When you install the extension, it sets up a WebJob that will take care of renewing the certificate every three months. You’ll need to set up a Storage account for the WebJob so it can keep track of when it needs to run.

    Set up a storage account

    Click Storage accounts on the left panel of the Azure portal, and click the Add button. Use these settings for the new Storage instance:

  • Deployment model: Resource manager
  • Account kind: General purpose
  • Performance: Standard
  • Replication: RA-GRS
  • Storage service encryption: Disabled
  • Resource group: Use existing (pick the group that contains your application)
  • Location: East US, or wherever you like
    It’ll take a minute or two to deploy the storage account. When it shows up in the Storage accounts list, select it and open the Access keys panel. Copy the account name and one of the access key values and create a connection string that follows this format:

    DefaultEndpointsProtocol=https;AccountName=<your_storage_account_name>;AccountKey=<your_storage_account_access_key>

    Copy the connection string and navigate to App Services > (your application) > Application settings > App settings. Create two new settings called AzureWebJobsStorage and AzureWebJobsDashboard, and paste the connection string in both.

    Start the WebJob

    Select your application in the App Services list, and restart it for good measure. Then, open the WebJobs subpanel. You should see a “letsencrypt” job in the list:
    Let's Encrypt webjob

    Select the job and click Start. The status should switch to Running. You can click Logs to view the logs and verify that the Let’s Encrypt renewal task is completing without errors. That’s it! Your certificate will now be renewed indefinitely.

    Redirecting traffic to HTTPS

    With the Let’s Encrypt certificate installed, your site can be reached via HTTPS. However, it can still be reached over plain old HTTP. Ideally, anyone who hits your site over HTTP would be automatically redirected to HTTPS.

    This can be accomplished with a small piece of custom middleware for ASP.NET Core. In your Startup class, place this at the top of the Configure method:

    if (env.IsProduction())
    {
        app.Use(async (context, next) =>
        {
            if (context.Request.IsHttps)
            {
                // Already on HTTPS; continue down the pipeline.
                await next();
            }
            else
            {
                // Rebuild the original URL with the https scheme and issue
                // a permanent (301) redirect.
                context.Response.Redirect($"https://{context.Request.Host}{context.Request.PathBase}{context.Request.Path}{context.Request.QueryString}", true);
            }
        });
    }

    Since the Let’s Encrypt certificate won’t be available locally, this middleware is only added to the pipeline when env.IsProduction() returns true. (It’s possible to install a local certificate for IIS Express to use in development, but that’s a post for another day!)

    Re-publish your application to Azure App Service using Visual Studio, and try accessing your site over HTTP. You’ll automatically be redirected to HTTPS. Awesome!

    Learn more

    With free certificates from Let’s Encrypt, there’s no reason not to enable TLS on your ASP.NET Core web applications. And Stormpath takes care of the security around user management, authentication, and authorization for you. It’s a win/win!

    Are you adding Let’s Encrypt to an ASP.NET Core application that isn’t hosted on Azure? I’d love to know what platform you’re using; let me know in the comments below.

    If you’re interested in learning more about authentication and user management for .NET, check out these resources:

  • Simple Social Login in ASP.NET Core
  • Token Authentication in ASP.NET Core
  • Tutorial: Deploy an ASP.NET Core Application on Linux with Docker

    The post Tutorial: Launch Your ASP.NET Core WebApp on Azure with TLS & Authentication appeared first on Stormpath User Identity API.

    September 16, 2016

    Vittorio Bertocci - Microsoft: Azure AD development lands on portal.azure.com [Technorati links]

    September 16, 2016 08:15 AM

    For the longest time, I watched with envy as my Azure colleagues drove their conference demos from the shiny portal.azure.com, while I had to stick with the good ol’ manage.windowsazure.com.

    Well, guess what! Yesterday we announced that the Azure AD management features are finally appearing in preview in portal.azure.com. Jeff wrote an excellent post about it; however, as is his nature, he focused on the administrative angle and relegated the development features to a paragraph tantamount to a footnote. That gave me enough motivation to break the blog torpor into which I’ve slid since finishing the book, and pen for you this totally unofficial guide to the awesome new development features in portal.azure.com. Enjoy!

    Basics

    Let’s take a look at this fabulous new portal, shall we? Pop open your favorite browser and navigate to https://portal.azure.com.

    You’ll land on a page like the one below.

    image

    Where is Azure AD? Click on “More services” on the left menu, and you’ll find it:

    image

    Click on it, and the next blade will open to something like this:

    image

    As Jeff’s post explains, the landing page offers lots of interesting insights on your Azure AD tenant, and various hooks for management actions.

    Just for kicks, let’s take a look at the Azure AD landing page in the old portal:

    image

    The first thing that jumps out: the old portal shows both VibroDirectory, the Azure AD tenant tied to my Azure subscription, and OsakaMVPDirectory, a test tenant I created when I visited Japan a couple of years ago (I need an excuse to get back there…awesome place, awesome people). That’s because the user I am signed in with, vibro@cloudidentity.net, is a user (in fact, an admin) in both tenants.
    I can easily choose which tenant I want to manage by clicking the corresponding entry.

    How do I achieve the same effect in portal.azure.com? Simple. See that user badge in the top right corner, informing you which user and tenant you are currently signed in with? Click on it:

    image

    Together with the usual account operations you expect to find there, you’ll also notice that all the tenants accessible by your user will be available for you to choose. Let’s see what happens if I select OsakaMVPDirectory.

    image

    Voilà! The portal changed to reflect the new tenant. As you can see, the landing page is far more barren… I’ve used that tenant just for playing a bit with Azure AD, nothing more.

    In fact, this is far more barren than you would probably expect from something displayed in an Azure portal… and here’s the kicker: that’s because this tenant has no Azure subscription associated with it! Don’t believe me? Click on all subscriptions.

    image

    That’s right. This is huge, so let me rephrase to make sure you appreciate the implications:

    You now have a portal you can use to manage Azure AD tenants that are NOT associated with an Azure subscription.

    The Office developers among you are probably jumping up and down right now. Go ahead, try it! Navigate to portal.azure.com and sign in with your Office dev account for your Office tenant; I’ll wait. See? That’s awesome!

    Now, don’t get me wrong. Having Azure AD capabilities alongside all the other Azure services you are using in your solution is a huge advantage in itself and I am in no way trying to minimize that. I am just excited that the Azure AD development portal capabilities are no longer strictly subordinated to that.

    Enough of this – let’s take a look at the meat of the developer features: application creation and editing.

    App creation and editing

    Let’s go back to the Azure AD landing page on portal.azure.com. Where are the developer features? If you thought “Enterprise applications” – sorry, no bonus. The developer features are all available behind the sibylline moniker “App registrations”. Click on it, and you’ll find yourself on the following blade.

    image

    Those are all the apps created in this tenant – that is, applications whose Application entity resides on this very tenant.
    Let’s compare with the same view in the old portal.

    image

    Some important differences jump out between the two views.

    Let’s pick one app and see what it looks like.

    image

    The first blade, Essentials, presents a quick summary of the main properties of the app. The settings blade, which opens automatically as soon as you select the app, corrals all the app properties in a neat set of categories. There’s even a nice search field that will show you in which bucket you’ll find the property you need.
    Nearly all the old properties are there: the rather large image below shows the mapping between old and new. I recommend you click on the pic to display the full image.

    PortalMapping

    Most notably, the dev features in the new portal do not offer any of the operations that would affect the ServicePrincipal of your app – that is to say, the instance of the app in your own tenant. In the old portal, creating an app meant both creating an Application object (the blueprint of your app) and provisioning that app right away in your own tenant. In the new portal, creating an app means just creating the blueprint, the Application. The user assignments, app role assignments, etc. are available in the admin portion of the portal – but you’ll be able to use those against your app only if you do provision it in your own tenant after creation.
    If you want to provision your app in your own tenant: run it, sign in with a user from your tenant who has the right privileges, and grant consent when prompted. That will lead to the provisioning of the app – that is to say, the creation of the ServicePrincipal in your tenant and the assignment of the permissions you consented to (there is a VERY detailed description of the process in this free chapter).

    There are lots of neat features tucked in those options, especially in the ones that have been historically difficult to deal with in the old portal. Let’s take a look at my two favorites: permission management and manifest editing.

    If you go to the Required permissions blade (finally a good name) and click on Add, you’ll find yourself at the beginning of a nice guided experience:

    image

    Clicking on Select an API, I get to a clean list of what’s available – even including a search box.

    image

    Let’s click on the Microsoft Graph and hit Select.

    image

    Now, isn’t that super neat? You get a nice list of permissions, subdivided into application and delegated permissions… and you even get an indication of which permissions can be consented to only by administrators vs. all users! Personally, the colors give me cognitive dissonance: as a developer who isn’t often an admin, the permissions requiring admin consent are the problematic ones. But! The information is there, and that wasn’t the case before.

    The other feature I really like, and I am sure it will be your favorite too, is the inline editing of the manifest.
    Azure AD applications have lots of settings that can’t be accessed via the portal – and sometimes, it’s just better to be able to cut & paste settings directly. For that purpose, the old portal offered the ability to download the app manifest (a JSON dump of the Application object, really), edit it locally, and re-upload it to apply changes.
    In the new portal, however, you can edit the manifest in place – no need to go through the download-edit-upload cycle! You can access the feature by going back to the Essentials blade and clicking on Edit manifest.
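    For orientation, here’s a rough sketch of what a few properties in the manifest look like. The property names below are standard Azure AD application manifest properties, but the values are made up; your manifest will contain many more entries.

```json
{
  "appId": "00000000-0000-0000-0000-000000000000",
  "displayName": "My Sample App",
  "identifierUris": [ "https://contoso.example/my-sample-app" ],
  "replyUrls": [ "https://localhost:44300/" ],
  "oauth2AllowImplicitFlow": false,
  "groupMembershipClaims": null
}
```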

    image

    There’s even some rudimentary auto-completion support, which is great for people like me with a nonexistent memory for keywords.

    Try it out!

    As diligently reported by the header of each and every blade, this stuff is still in preview. Your input is always super valuable – the right place to provide it in this case is in the ‘Admin Portal’ section of our feedback forum.

    I hope you’ll enjoy this feature as much as I plan to enjoy shedding my old-portal complex and finally using portal.azure.com at the next conference – which, by the way, is just 10 days away! See you in Atlanta!

    September 14, 2016

    Katasoft: Spring Boot WebMVC – Spring Boot Technical Concepts Series, Part 3 [Technorati links]

    September 14, 2016 04:11 PM

    Spring Boot, with Spring Boot WebMVC, makes it easy to create MVC apps with very clear delineations and interactions. The Model represents the formal underlying data constructs that the View uses to present the look and feel of the application to the user. A Controller is like a traffic cop: it receives incoming requests (traffic) and routes that traffic according to your application’s configuration.

    This is a huge upgrade from the early days of JSP, when it was not uncommon to have one file (or a small number of files), each with its own baked-in logic. It was basically one giant View with internal logic to deal with inputs and session objects. This was a terrible design AND it didn’t scale well. In practice, you ended up with bloated, monolithic template files that had a heavy mix of Java and template code.

    So, how do we use Spring Boot to create web MVC applications? I can’t wait to show you how simple it is! We’ll start with a very simple RESTful API example and then expand the example to use Thymeleaf, a modern templating engine.

    The code used throughout this post can be found here. The examples below use HTTPie, a modern curl replacement.

    Looking for a deeper dive? In the next post of our Spring Boot Technical Series we’ll dig even deeper into Thymeleaf for form validation and advanced model handling.

    Set Up Your pom.xml in Spring Boot WebMVC

    Here’s a snippet from the pom.xml file. The only dependency is spring-boot-starter-web (the parent takes care of all versioning):

    ...
    
    <parent>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-parent</artifactId>
      <version>1.4.0.RELEASE</version>
    </parent>
    
    ...
    
    <dependencies>
    
      ...
    
      <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
      </dependency>
    
      ...
    
    </dependencies>
    
    ...

    Build a RESTful API in 10 Lines

    Let’s take a look at the simplest Spring Boot MVC “Hello World” application:

    @SpringBootApplication
    @RestController
    public class SpringMVCApplication {
        public static void main(String[] args) {
            SpringApplication.run(SpringMVCApplication.class, args);
        }
    
        @RequestMapping("/")
        public String helloWorld() { return "Hello World!"; }
    }

    To tell the truth, we are cheating just a little bit here. We’ve put the entire application in a single class, and it’s responsible for the Controller as well as the Spring Boot application itself.

    In this case, there really isn’t a model or a view. We’ll get to that shortly.

    The @SpringBootApplication annotation makes it (not surprisingly) a Spring Boot application. The annotation actually is a shorthand for three other annotations:

  • @Configuration – Tells Spring Boot to look for bean definitions that should be loaded into the application context in this class (we don’t have any).
  • @EnableAutoConfiguration – Automatically loads beans based on configuration and other bean definitions.
  • @ComponentScan – Tells Spring Boot to look for other components (as well as services and configurations) that are in the same package as this application. This makes it easy to set up external Controllers without additional coding or configuration. We’ll see this next.

    We also get some additional Spring Boot autoconfiguration magic: if it finds spring-webmvc on the classpath, we don’t have to explicitly add the @EnableWebMvc annotation.

    So, @SpringBootApplication packs quite a punch!

    The @RestController annotation tells Spring Boot that this class will also function as a controller and will return a particular type of response – a RESTful one. This annotation is also multiple annotations bundled up into one:

  • @Controller – Tells Spring Boot that this is a controller component
  • @ResponseBody – Tells Spring Boot to return data, not a view

    You can fire up this application and then run:

    http localhost:8080

    You’ll see:

    HTTP/1.1 200
    Content-Length: 12
    Content-Type: text/plain;charset=UTF-8
    Date: Tue, 06 Sep 2016 19:48:46 GMT
    
    Hello World!

    Working with Models

    The previous example was really only a Controller. Let’s add in some Models. Once again, Spring Boot WebMVC makes this super easy.

    First, let’s see what’s going on in our Controller:

    @RestController
    public class MathController {
    
        @RequestMapping(path = "/maths", method = POST)
        public MathResponse maths(@RequestBody MathRequest req) {
            return compute(req);
        }
    
        private MathResponse compute(MathRequest req) {
          ...
        }
    }

    On line 4, we see that the maths method will only accept POST requests.

    On line 5, we see that the maths method returns a model object of type MathResponse. And, the method expects a parameter of type MathRequest.
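    The body of compute is elided in the post. As a rough sketch of the arithmetic it might perform (simplified to primitives rather than the MathRequest/MathResponse types, and supporting only the 'square' operation shown in the sample request below), it could look like:

```java
// Hypothetical stand-in for the elided compute() logic.
// The real method returns a MathResponse; this sketch just returns the number.
public class ComputeSketch {
    static int compute(int num, String operation) {
        switch (operation) {
            case "square":
                return num * num;
            default:
                throw new IllegalArgumentException("Unsupported operation: " + operation);
        }
    }

    public static void main(String[] args) {
        // Mirrors the sample request: num=5, operation=square
        System.out.println(compute(5, "square")); // prints 25
    }
}
```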

    Let’s see what a request produces:

    http -v POST localhost:8080/maths num=5 operation=square
    
    POST /maths HTTP/1.1
    ...
    {
        "num": "5",
        "operation": "square"
    }
    
    HTTP/1.1 200
    ...
    {
        "input": 5,
        "msg": "Operation 'square' is successful.",
        "operation": "square",
        "result": 25,
        "status": "SUCCESS"
    }

    Notice that a JSON object is passed in and a JSON object is returned. There’s some great Spring Boot magic going on here.

    Jackson for Java to JSON Mapping

    In the old days before Spring Boot and Spring Boot WebMVC (like 2 years ago), you had to manually deserialize incoming JSON into Java objects and serialize Java objects back into JSON for the response. This was often done with the Jackson JSON mapper library.

    Spring Boot includes Jackson by default and attempts to map JSON to Java Objects (and back) automatically. Now, our controller method signature starts to make more sense:

    public MathResponse maths(@RequestBody MathRequest req)

    Note: Remember from before that using the @RestController annotation automatically ensures that all responses are @ResponseBody (that is, data – not a view).

    Here’s our MathRequest model object:

    public class MathRequest {
    
        private int num;
        private String operation;
    
        // getters and setters here
    }

    Pretty straightforward POJO here. Jackson can easily handle taking in the JSON above, creating a MathRequest object and passing it into the maths method.

    Here’s our MathResponse model object:

    @JsonInclude(Include.NON_NULL)
    public class MathResponse {
    
        public enum Status {
            SUCCESS, ERROR
        }
    
        private String msg;
        private Status status;
        private String operation;
        private Integer input;
        private Integer result;
    
        // getters and setters here
    }

    Notice that in this case, we’re using the @JsonInclude(Include.NON_NULL) annotation. This tells Jackson to omit any null-valued property of the model object from the response.

    Return A View For the Full MVC Experience

    In the last section, we added Model capabilities to our application to go along with the Controller. To round out our conversation, we will now make use of Views.

    To do this, we’ll start by adding in Thymeleaf as a dependency to our application. Thymeleaf is a modern templating engine that’s very easy to use with Spring Boot.

    We simply replace:

    <artifactId>spring-boot-starter-web</artifactId>

    With:

    <artifactId>spring-boot-starter-thymeleaf</artifactId>

    Let’s take a look at our controller:

    @Controller
    public class MathController {
    
        @Autowired
        MathService mathService;
    
        @RequestMapping(path = "/compute", method = GET)
        public String computeForm() {
            return "compute-form";
        }
    
        @RequestMapping(path = "/compute", method = POST)
        public String computeResult(MathRequest req, Model model) {
            model.addAttribute("mathResponse", mathService.compute(req));
    
            return "compute-result";
        }
    }

    For both methods, computeForm and computeResult, the path is the same: /compute. That’s where the method attribute comes in. computeForm is only for GET requests and computeResult is only for POST requests.

    computeForm simply returns a template called compute-form. Using the default location for templates, we create the file src/main/resources/templates/compute-form.html, which displays a simple form for input.
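    The form template itself isn’t shown in the post. A minimal compute-form.html sketch might look like the snippet below; the field names match the MathRequest properties so Spring can bind them, but the surrounding markup is an assumption:

```html
<form action="/compute" method="post">
    <input type="text" name="num" placeholder="Number"/>
    <input type="text" name="operation" placeholder="Operation (e.g. square)"/>
    <button type="submit">Compute</button>
</form>
```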

    The computeResult method takes MathRequest and Model objects as parameters. Spring Boot automatically binds the form submission to a MathRequest object (for form posts this is Spring’s data binding at work, rather than the Jackson JSON mapping described before). And Spring Boot automatically passes in the Model object. Any attribute added to this model object is available to the template ultimately returned by the method.

    The line model.addAttribute("mathResponse", mathService.compute(req)); ensures that the resulting MathResponse object is added to the model, which makes it available to the returned template. In this case, the template is compute-result.html:

    ...
    <div th:if="${mathResponse.status.name()} == 'ERROR'">
        <h1 th:text="'ERROR: ' + ${mathResponse.msg}"/>
    </div>
    <div th:if="${mathResponse.status.name()} == 'SUCCESS'">
        <h1 th:text="${mathResponse.input} + ' ' + ${mathResponse.operation} + 'd is: ' + ${mathResponse.result}"/>
    </div>
    ...

    The above snippet is the Thymeleaf syntax for working with the mathResponse object from the model. If there was an error, we show the message. If the operation was successful, we show the result.

    Now I Know My MVC, Won’t You Sing Along With Me?

    Here’s a partial view of the project structure:

    .
    ├── java
    │   └── com
    │       └── stormpath
    │           └── example
    │               ├── controller
    │               │   ├── MathController.java
    │               │   └── MathRestController.java
    │               └── model
    │                   ├── MathRequest.java
    │                   └── MathResponse.java
    └── resources
        └── templates
            ├── compute-form.html
            └── compute-result.html

    The Models used in the example are MathRequest and MathResponse. The Views are in the templates folder: compute-form.html and compute-result.html. And the Controllers are MathRestController and MathController.

    Having the concerns separated in this way makes for a very clear and easy-to-follow application.

    In the next installment of the Spring Boot series, we will delve deeper into Thymeleaf templates, including form validation and error messaging.

    Learn More

    Need to catch up on the first two posts from this series, or just can’t wait for the next one? We’ve got you covered:

  • Default Starters — Spring Boot Technical Concepts Series, Part 1
  • Dependency Injection — Spring Boot Technical Concepts Series, Part 2
  • Secure Your Spring Boot WebApp with Apache & LetsEncrypt SSL in 20 Minutes
  • Tutorial: Build a Flexible CRUD App with Spring Boot in 20 Minutes
  • Watch: JWTs in Java for Microservices and CSRF Prevention

    The post Spring Boot WebMVC – Spring Boot Technical Concepts Series, Part 3 appeared first on Stormpath User Identity API.

    September 13, 2016

    OpenID.net: Harmonizing IETF SCIM and OpenID Connect: Enabling OIDC Clients to Use SCIM Services [Technorati links]

    September 13, 2016 07:18 PM

    OpenID Connect (OIDC) 1.0 is a key component of the “Cloud Identity” family of standards. At Oracle, we have been impressed by its ability to support federated identity both for cloud business services and in the enterprise. This is why we recently joined the OpenID Foundation as a Sustaining Corporate Member.

    In addition to OIDC, we are also strong proponents of the IETF SCIM standard. SCIM provides a JSON-based standard representation for users and groups, together with REST APIs for operations over identity objects. The schema for user objects is extensible and includes support for attributes that are commonly used in business services, such as group, role and organization. 
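    For reference, a SCIM 2.0 user resource looks roughly like this (a trimmed fragment in the style of the examples in RFC 7643, the SCIM core schema):

```json
{
  "schemas": ["urn:ietf:params:scim:schemas:core:2.0:User"],
  "id": "2819c223-7f76-453a-919d-413861904646",
  "userName": "bjensen@example.com",
  "name": {
    "givenName": "Barbara",
    "familyName": "Jensen"
  },
  "emails": [
    { "value": "bjensen@example.com", "primary": true }
  ]
}
```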

    Federated identity involves two components: secure delivery of user authentication information to a relying party (RP), and delivery of user profile or attribute information. Many of our customers and developers have asked us: can OIDC clients interact with a SCIM endpoint to obtain or update identity data? In other words, can we combine SCIM and OIDC to solve a traditional use-case supported by LDAP for enterprise applications (bind, attribute lookup), recast for the modern frameworks of REST and cloud services?

    Working collaboratively with other industry leaders, we have published just such a proposal[1]. The draft explains how an OpenID Connect RP can interact with a SCIM endpoint to obtain or update user information. This allows business services to use the standard SCIM representations for users and groups, yet have the information conveyed to the service in a single technology stack based upon the OIDC protocols.

    SAML, OIDC, SCIM and OAuth are the major architectural “pillars” of cloud identity. We would like to see them work together in a uniform and consistent way to solve cloud business service use-cases. Harmonizing SCIM and OIDC is an important step in that direction.

    Prateek Mishra, Oracle

    [1] http://openid.net/specs/openid-connect-scim-profile-1_0.html   

    Katasoft: Authentication with Salesforce, SAML, & Stormpath in 15 Minutes [Technorati links]

    September 13, 2016 04:41 PM

    Salesforce is a popular business software platform with many functions and features – not just a CRM for B2B applications. Allowing users to log in with their Salesforce credentials is often necessary functionality, but working with SAML is many a developer’s least favorite task. That’s where Single Sign-On with the Stormpath Java SDK and Spring Boot integration comes in.

    In this tutorial, I’ll walk you through how simple it is to configure SAML single sign-on with Stormpath and connect it to Salesforce.

    Setup Salesforce to Connect to Stormpath

    To begin, we have to enable SAML on both the Stormpath and Salesforce sides and then connect the two. We do this via the Salesforce front-end and the Stormpath Admin Console screens. To connect Salesforce to our Stormpath tenant we need to modify three parts of the global settings from Salesforce — the Identity Provider, Single Sign-On, and the Connected App.

    All of these settings can be found under Setup Home when clicking on the gear icon on the top-right.

    Identity Provider

    SAML breaks authentication into three parts – the User, the Service Provider, and the Identity Provider. The service provider provides access to the service; the identity provider verifies the user’s identity and vouches for it. The most common identity providers are Facebook and Google. You have probably seen the ‘Login with Google’ buttons on various sign-in pages.

    We need to set our Salesforce instance up as an Identity Provider. The screen for this is under Settings > Identity > Identity Provider.

    Just click on Enable Identity Provider. Then click Save and download both the Certificate and Metadata (which we will use in a moment).

    Single Sign-On

    The term Single Sign-On (SSO) encapsulates what SAML allows — users accessing various sites and resources with one credential. We enable this on Salesforce by going to Settings > Identity > Single Sign-On Settings. Click Edit, check ‘SAML Enabled’, and then click Save. Finally, click ‘New from Metadata File’, select the metadata we just downloaded and click Create. Don’t worry about filling in details.

    Connected App

    The last part of our three-part Salesforce configuration is Apps. Apps are how Salesforce enables functionality. Go to Platform Tools > Apps > Apps. Scroll down to the Connected Apps section and click New. Type in a name and email (anything will do), scroll down to the Web App Settings, and check Enable SAML. Type anything you like into the Entity ID (like ‘changeme’) and ACS URL (like ‘http://example.com’) – we’ll be filling these in with details from Stormpath shortly. Then set the Name ID Format to emailAddress and click Save.
    Salesforce connected apps

    Click on Manage and make a note of the SP-Initiated Redirect Endpoint. We’ll be using these details in our Stormpath configuration.

    Setup Your SAML Integration in Stormpath

    The second half of our setup tasks happen in your Stormpath Admin Console. Primarily this involves three things — creating a SAML Directory, linking your Application, and configuring Mapping Attributes.

    Create a SAML Directory

    In the Directories tab, click on Create Directory, select SAML from the Directory Type, and give it a name. Enter in the endpoint we just mentioned into both URL fields (Login/Logout) and copy the contents of the certificate we downloaded into the Cert box. Make sure the Algorithm is RSA-SHA256 and click the create button. Your new directory should be shown in the directories list.

    Stormpath SAML Admin Console

    Link Your Application

    Before we move on to the Stormpath Application, we need to link the directory we just created to our Salesforce Application, using its Entity ID and ACS URL fields. Into Entity ID, enter the directory HREF (you can see it when you click the directory); into ACS URL, enter the Assertion Consumer Service URL (shown in the Identity Provider tab, and at the bottom of the directory page). Just click on Edit, change the fields, and click Save.
    Salesforce WebApp Settings

    Configure Your Account Store

    Now we need to set up the application you link to when authenticating via Stormpath. Open up the application you intend to use via the Applications tab. Make sure the Authorized Callback URIs contains the URL of your user interface. (If you are running the app locally, the callback should be http://localhost:8080/stormpathCallback).

    Click on the Account Stores navigation button and then Add Account Store. You should be in the Directories tab from which you can select the directory we created above. Click Create Mappings. A mapping should appear in the list of stores for your application.
    SAML Account Store Config

    Booting with Spring Boot

    To determine if our initial setup has been successful, we need an application that is linked to Stormpath. We have a sample setup here for this tutorial. You will need to update the application.properties file in src/main/resources to point to your application and use the right keys.

    stormpath.application.href = https://api.stormpath.com/v1/applications/5ikoEqLaKz1Rocw2QuRjpM
    stormpath.apiKey.id = <your api key>
    stormpath.apiKey.secret = <your api secret>

    Note: In production, you shouldn’t put your application href and keys into application.properties. It’s better to use environment variables than to bake these values into code.
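    For example, with Spring Boot’s relaxed property binding, the same three settings could be supplied as environment variables instead. The names below follow the usual uppercase-with-underscores convention and are an illustration, not an official Stormpath requirement; adjust for your setup:

```shell
export STORMPATH_APPLICATION_HREF=https://api.stormpath.com/v1/applications/<your_app_id>
export STORMPATH_APIKEY_ID=<your api key>
export STORMPATH_APIKEY_SECRET=<your api secret>
```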

    You should now be able to boot up directly using Maven.

    mvn spring-boot:run

    Browsing to localhost:8080 should show you a simple homepage.

    Local Host -- Salesforce / Spring Boot WebApp

    Clicking on the Restricted button will show the login screen which now has a Salesforce login button.
    Stormpath Login Screen with Salesforce

    Clicking on the Salesforce button should take you to a Salesforce login page.

    Salesforce Login

    Once you log in, you will be taken back to the Spring Boot Application page, but now with a hello message displayed.

    Restricted View

    The reason we’re seeing NOT_PROVIDED is because we haven’t set up our attribute mappings.

    Configure Attribute Mappings

    So far all we’ve set up is how we identify the user, and that’s via username. (We set it using the Name ID Format in Salesforce when we created our application). However, if we look at the template used to generate our logged-in homepage we can see it uses the fullName on the account, which we haven’t mapped yet.

    <h1 th:if="${account}" th:inline="text">Hello, [[${account.fullName}]]!</h1>

    In Stormpath, the account fullName is built from the given name and surname. See this explanation in the Stormpath documentation to learn more about account fields.
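    Conceptually, the derived fullName is just a concatenation. A sketch of the idea (not Stormpath’s actual implementation, which also handles middle names and missing fields):

```java
// Illustrative only: how a fullName could be derived from its parts.
public class FullNameSketch {
    static String fullName(String givenName, String surname) {
        return givenName + " " + surname;
    }

    public static void main(String[] args) {
        System.out.println(fullName("Barbara", "Jensen")); // prints Barbara Jensen
    }
}
```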

    For now, we need to map those values onto the SAML data from Salesforce, and then from the SAML data to the relevant Stormpath values.

    From Salesforce

    Inside of your application, at the bottom, is a section called Custom Attributes.

    Salesforce Custom Attributes

    Click on the New button. This will bring up a dialogue with Key and Value fields. Inside Key put ‘firstname’. Then click on Insert Field, click on $User > and then First Name, and then click Insert. This will put the correct string into the Value field which is the user’s first name. Click Save.

    Do this again for the user’s last name and you should have two custom attributes defined.
    Salesforce Custom Attributes

    To Stormpath

    In the Stormpath Admin click the Directories tab, select the directory we created above, and scroll down to the Attribute Mappings tab. When you click into that tab you should see three columns – Attribute Name, Attribute Name Format, and Stormpath Field Names. In the first column put firstname and in the last put givenName (the middle field is optional). Then in another row put lastname and surname, respectively.
    Stormpath SAML Admin Console

    Click save!

    Restart Your Application = Success!

    Now if we restart our local application and log in again, we should see the user’s (in this case my) first and last name pulled in from Salesforce.
    Salesforce SAML Login Screen

    Learn More

    As you’ve hopefully seen from this tutorial, setting up single sign-on with Stormpath and Salesforce makes working with SAML a breeze! To learn more about authentication with Stormpath, or our SAML integration, check out these resources:

  • Watch: No-Code SAML Support for SaaS Applications
  • Build a No-Database Spring Boot Application with Stormpath Custom Data
  • Add Google Login to Your Java Single Sign-On Setup
  • The post Authentication with Salesforce, SAML, & Stormpath in 15 Minutes appeared first on Stormpath User Identity API.

    September 12, 2016

    KatasoftSecure Your Spring Boot WebApp with Apache and LetsEncrypt SSL in 20 Minutes [Technorati links]

    September 12, 2016 06:34 PM

    Spring Boot can run as a standalone server, but putting it behind an Apache web server has several advantages, such as load balancing and cluster management. Now with LetsEncrypt it’s easier than ever (and free) to secure your site with SSL.

    In this tutorial, we’ll secure an Apache server with SSL and forward requests to a Spring Boot app running on the same machine. (And once you’re done you can add Stormpath’s Spring Boot integration for robust, secure identity
    management that sets up in minutes.)

    Set Up Your Spring Boot Application

    The most basic Spring Boot webapp just shows a homepage. Using Maven, this has four files: pom.xml, Application.java, RequestController.java, and home.html.

    The pom.xml file (in the root folder) declares four things: application details, starter parent, starter web dependency, and the Maven plugin (for convenience in running from the console).

    <project>
    
        <modelVersion>4.0.0</modelVersion>
    
        <groupId>com.stormpath.sample</groupId>
        <artifactId>basic-web</artifactId>
        <version>0.1.0</version>
    
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>1.4.0.RELEASE</version>
        </parent>
    
        <dependencies>
           <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-web</artifactId>
            </dependency>
        </dependencies>
    
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    
    </project>

    Application.java (src/main/java/com/stormpath/tutorial) simply declares the application entry point for Spring Boot.

    @SpringBootApplication
    public class Application  {
        public static void main(String[] args) {
            SpringApplication.run(Application.class, args);
        }
    }

    RequestController.java (src/main/java/com/stormpath/tutorial) maps all requests to the homepage.

    @Controller
    public class RequestController {
    
        @RequestMapping("/")
        String home() {
            return "home.html";
        }
    }

    Finally, home.html (src/main/resources/static) just declares a title and a message.

    <!DOCTYPE html>
    <html>
    <head><title>My App</title></head>
    <body><h1>Hello there</h1></body>
    </html>

    Note: you can clone this basic project from the GitHub repo.

    Next, run:

    mvn spring-boot:run

    You should see the page when browsing to localhost:8080.

    Launch your Spring Boot webapp

    Launch Apache

    Next, we need to fire up Apache. I created an Ubuntu instance on EC2 (check out the AWS Documentation for a getting started guide). I then logged in and installed Apache with the following:

    sudo apt-get install apache2

    This should install and start an Apache server running on port 80. After adding HTTP to the instance inbound security group (again here, the AWS Documentation contains a guide) you should be able to browse to the public DNS.
    Apache Ubuntu default page

    Add LetsEncrypt

    LetsEncrypt has policies against generating certificates for certain domains; amazonaws.com is one of them (because its hostnames are normally transient). You need to add a CNAME record on a personal domain that points to the instance you created. Here I’m using kewp.net.za.

    The following commands should install SSL certificates to your domain automatically.

    sudo git clone https://github.com/letsencrypt/letsencrypt /opt/letsencrypt
    cd /opt/letsencrypt
    ./letsencrypt-auto --apache -d kewp.net.za

    Browsing to your personal domain should now bring up the Apache homepage with SSL.
    Apache with SSL from LetsEncrypt

    Note: Chrome didn’t like the security of my page (I didn’t get the green icon) because the standard Ubuntu front-end returns unencrypted (http) contents.

    Build a Connector for Spring Boot

    We have to tell Spring Boot to make a connector using AJP, a proxy protocol that connects Apache to Tomcat. To do this, add the following to the bottom of the class in Application.java.

    @Bean
        public EmbeddedServletContainerFactory servletContainer() {
    
            TomcatEmbeddedServletContainerFactory tomcat = new TomcatEmbeddedServletContainerFactory();
    
            Connector ajpConnector = new Connector("AJP/1.3");
            ajpConnector.setProtocol("AJP/1.3");
            ajpConnector.setPort(9090);
            ajpConnector.setSecure(false);
            ajpConnector.setAllowTrace(false);
            ajpConnector.setScheme("http");
            tomcat.addAdditionalTomcatConnectors(ajpConnector);
    
            return tomcat;
        }

    We’re setting the AJP port to 9090 manually. You might want to add a variable to application.properties and pull it in with @Value to make it more configurable.

    Restart the app as above and you should see messages that Tomcat is now listening on both port 8080 and port 9090. Note: the GitHub repository above has the connector code included so you can just use that from the start.

    Run the Application on Your Instance

    In the screenshots above I’ve been running the web app on my local Windows machine for testing. To get it to run on your instance just do the following.

    git clone https://github.com/stormpath/apache-ssl-tutorial
    cd apache-ssl-tutorial
    mvn spring-boot:run

    Reroute Apache

    Now we tell Apache to pass all traffic to our application. We can use the proxy and proxy_ajp modules for that. But first, we need to enable them.

    sudo a2enmod proxy
    sudo a2enmod proxy_ajp

    Now we need to update the virtual host on port 443 to use the connector we created. For me the relevant file was /etc/apache2/sites-available/000-default-le-ssl.conf. Add the following to the bottom of the <VirtualHost *:443> element.

    ProxyPass / ajp://localhost:9090/
    ProxyPassReverse / ajp://localhost:9090/

    And at last, restart the server.

    sudo service apache2 restart

    Add Another Security Group

    Now we need to ensure that EC2 allows HTTPS traffic. Add HTTPS to the inbound security group as before.

    Fire Up Your New (Secure) Spring Boot Application!

    Now when you browse to your domain, you should see our Spring Boot web app, secured behind SSL!

    Spring Boot webapp with SSL

    Configure SSL Between Apache and Tomcat

    One last thing. The traffic between Apache and Tomcat is currently unencrypted (HTTP). This can be a problem for some apps (like Stormpath – which requires a secure connection). To fix this, we use something called Tomcat’s RemoteIpValve. Enable this by adding the following to your application.properties.

    server.tomcat.remote_ip_header=x-forwarded-for
    server.tomcat.protocol_header=x-forwarded-proto

    Apache will set these headers by default and then Tomcat (embedded in Spring Boot) will properly identify the incoming traffic as SSL.

    Add Authentication

    Application security is intrinsic to what we do here at Stormpath. Our team of Java security experts have just released the 1.0 version of our Java SDK, and with it massive updates to our Spring and Spring Boot integrations. You can add authentication for secure user management in this or any Spring Boot application in just 15 minutes! Check out our Spring Boot Quickstart to learn how!

    Spring Boot webapp with Apache and LetsEncrypt SSL

    The post Secure Your Spring Boot WebApp with Apache and LetsEncrypt SSL in 20 Minutes appeared first on Stormpath User Identity API.

    Ludovic Poitou - ForgeRockOpenDJ: Monitoring Unindexed Searches… [Technorati links]

    September 12, 2016 01:41 PM

    OpenDJ, the open source LDAP directory server, makes use of indexes to optimise search queries. When a search query doesn’t match any index, the server will cursor through the whole database to return the entries, if any, that match the search filter. These unindexed queries can require a lot of resources: I/Os, CPU… In order to reduce the resource consumption, OpenDJ rejects unindexed queries by default, except for the Root DNs (i.e. for cn=Directory Manager).

    In previous articles, I’ve talked about privileges for administrative accounts, and also about Analyzing Search Filters and Indexes.

    Today, I’m going to show you how to monitor for unindexed searches by keeping a dedicated log file, using the traditional access logger and filtering criteria.

    First, we’re going to create a new access logger, named “Searches” that will write its messages under “logs/search”.

    dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
        create-log-publisher \
        --set enabled:true \
        --set log-file:logs/search \
        --set filtering-policy:inclusive \
        --set log-format:combined \
        --type file-based-access \
        --publisher-name Searches

    Then we’re defining a Filtering Criteria, that will restrict what is being logged in that file: Let’s log only “search” operations, that are marked as “unindexed” and take more than “5000” milliseconds.

    dsconfig -D cn=directory\ manager -w secret12 -h localhost -p 4444 -n -X \
        create-access-log-filtering-criteria \
        --publisher-name Searches \
        --set log-record-type:search \
        --set search-response-is-indexed:false \
        --set response-etime-greater-than:5000 \
        --type generic \
        --criteria-name Expensive\ Searches

    Voila! Now, whenever a search request is unindexed and takes more than 5 seconds, the server will log the request to logs/search (in a single line) as below:

    $ tail logs/search
    [12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=10003 unindexed etime=6542

    This file can be monitored and used to trigger alerts to administrators, or simply used to collect and analyse the filters that result in unindexed requests, in order to better tune the OpenDJ indexes.
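    As a hedged sketch (not from the original article), the single-line log format can be scraped with standard shell tools to collect the offending filters and their elapsed times. The sample log line written below mimics the tail output shown above; the log path logs/search matches the publisher configured earlier.

```shell
#!/bin/sh
# Sketch: extract filter="..." and etime=... from lines marked unindexed.
mkdir -p logs
# Sample line for illustration (same shape as the real access log output).
printf '%s\n' '[12/Sep/2016:14:25:31 +0200] SEARCH conn=10 op=1 msgID=2 base="dc=example,dc=com" scope=sub filter="(objectclass=*)" attrs="+,*" result=0 nentries=10003 unindexed etime=6542' > logs/search

# Keep only unindexed entries, then pull out the filter and elapsed time.
grep ' unindexed ' logs/search \
  | sed -n 's/.*filter="\([^"]*\)".*etime=\([0-9][0-9]*\).*/filter=\1 etime=\2ms/p'
```

    On the sample line this prints `filter=(objectclass=*) etime=6542ms`; the same pipeline could feed an alerting script or a report of filters that need indexes.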

    Note that sometimes it is a good option to leave some requests unindexed, when the cost of indexing them outweighs the benefits of the index: typically requests that are infrequent, run by specific administrators for reporting purposes, and expected to return a lot of entries. In that case, a best practice is to run these expensive requests against a dedicated replica used for administration, and to tune the client applications to expect long response times.


    Filed under: Directory Services Tagged: directory-server, ForgeRock, index, ldap, opendj, opensource, performance, search, Tips, tuning

    GluuIs Google getting ready to buy Okta? [Technorati links]

    September 12, 2016 10:30 AM

    Be skeptical of “definitions” that collide with marketing imperatives.

    There’s been news recently related to a tightening relationship between Google and Okta. Here’s a quote from a recent ZD Net article:

    “Together, [Okta and Google] will provide a multi-cloud reference architecture. As customers transition to a multi-cloud environment, they’ll be able to use the Okta Identity Cloud to connect to legacy, on-premises technology. Okta and Google are also working together to equip global systems integrators, Google resellers and independent software vendors with training and tools to accelerate the move to the cloud. Additionally, Google has branded Okta as one of its “preferred identity partners” for Google Apps deployments in the enterprise.”

    This is all well and fine. Providing organizations easy access to identity security technology is a good thing. However, a Google acquisition or partnership can have interesting consequences for other vendors in the space.

    Anytime Google picks favorites there is the potential that they might leverage their near-monopolistic position in search to further their agenda. And with regard to the Okta-Google partnership, we’re already seeing a ripple effect in Okta’s positioning among Google search rankings. For example, check out this definition for the technical term “inbound saml”:

    inbound saml11   Google Search

    Inbound SAML is industry jargon for a specific use case of the SAML 2.0 open standard, which is by definition vendor neutral. This is a bad definition…

    Before going further, let me provide some context as to why this search is important. The “Inbound SAML” requirement drives revenue, not expense. A company that searches for this term is a valuable prospect. Companies invest in infrastructure rarely, and only when they are forced to do so. The ROI for infrastructure is difficult to calculate. However, the ROI for infrastructure that drives revenue is much more compelling.

    Inbound SAML enables an organization to offer SAML authentication as a front door to their digital service. It’s a common requirement for SaaS providers, who want to make sure they can support the authentication requirements of large enterprise customers. If you have this requirement, you normally don’t wait to do something about it. Frequently there is a valuable customer that needs service soon.

    So Google is giving Okta valuable free advertising for a vendor neutral search term. It’s unfair to websites that actually provide a real definition (organic) and to organizations that pay to advertise for this search term. In fact, you can see in the screenshot that an ad from another vendor, Ping Identity, is completely undermined by the full snippet of Okta’s documentation that is being displayed like a definition.

    To many people (myself included!), results displayed like this carry extra weight. Here are some other searches where Google displays its definition-style results:

    The NL West standings:

    nl west standings   Google Search

    Collusion:

    collusion   Google Search

    How to make a hard boiled egg:

    how to make a hard boiled egg   Google Search

    This type of display is typically reserved for searches that have straightforward and factual results–not vendor promotion. Google’s definition for “inbound saml” is at the very least misleading. It erodes our trust in Google, and undermines the integrity of their platform.

    Google controls much of what we see on the Internet so it is difficult to have an accurate understanding of how search results are manipulated to favor their products and the products and services of their partners. But as a vendor in a space where Google now seems to have a horse in the race, this type of preferential treatment is troubling.

    It also raises the question of why Google is being such a good friend. After the recent acquisition of Apigee, it makes one wonder. Is this a sign that Okta is the next target?

    September 09, 2016

    Paul TrevithickE-commerce and Same-Day Delivery Services [Technorati links]

    September 09, 2016 04:55 PM

    With e-commerce booming, continuing a trend that has now been established long enough for the biggest players to really master the home delivery business model, competition is higher than ever. Getting their goods out there in time to compete with the likes of Amazon and the big retailers can be a major stumbling block for companies across the country, even if all the other pieces are in place. Opting for cheap man and van hire in London for next day or same day courier services may be a suitable answer for many companies, since the UK’s capital is big enough to support healthy competition while not requiring businesses to search too far afield for enough customers to keep them afloat. In fact, in many ways the smaller e-commerce ventures actually have an advantage when it comes to same day delivery.

    Logistics can often be forgotten in the race to build the most impressive online presence, but of course this is a vital aspect of running any online sales business whether you’re delivering food, goods or pretty much anything. Many brands focus too much on image and differentiating themselves, and it all sounds wonderful in theory. Unfortunately, failure to deliver results both literally and figuratively can spell the end for many start-ups. In fact, same day delivery services in particular have been a tough one to crack even for large corporations, so how can small ventures in a city like London do a better job with local same-day courier services?

    For one thing, many companies both large and small have failed because the margins are simply not there on the products they’re trying to sell and deliver on the same day. You need to crack that before you stand a chance of success, and part of this problem inevitably involves scale. Having a reliable business model that works becomes crucial here, because it allows you to hook customers into a subscription program. This means you’re able to cover your costs by charging people in advance for the luxury of same-day deliveries, and the customer is more likely to buy more items from you to make the most of their investment. Building trust can unlock this hidden potential.

    On the face of it, small businesses aren’t going to benefit from the advantage of making hundreds of deliveries in a single round trip. However, there are ways to cut overheads – for example, running a business that picks up and delivers items without the need to store them in a warehouse in between. If your business is basically the front for a local cheap man and van hire service you’re employing, but you’re handling the customer’s needs and catering to them, there’s an opportunity there for big profits, especially if you can establish the trust we mentioned.

    At the moment there’s a particular focus among the bigger delivery companies on same-day deliveries because it’s something that hasn’t worked fantastically for anyone yet, big or small. It will be interesting to see over the next couple of years who really manages to crack this market and master the art of turning e-commerce into a truly convenient consumer solution.

    The post E-commerce and Same-Day Delivery Services appeared first on Incontexblog.org.

    Mike Jones - Microsoft“amr” Values specification addressing WGLC comments [Technorati links]

    September 09, 2016 04:52 PM

    Draft -02 of the Authentication Method Reference Values specification addresses the Working Group Last Call (WGLC) comments received. It adds an example to the multiple-channel authentication description and moves the “amr” definition into the introduction. No normative changes were made.

    The specification is available at:
    • http://tools.ietf.org/html/draft-ietf-oauth-amr-values-02

    An HTML-formatted version is also available at:
    • http://self-issued.info/docs/draft-ietf-oauth-amr-values-02.html

    Gerry Beuchelt - MITRELinks for 2016-09-08 [del.icio.us] [Technorati links]

    September 09, 2016 07:00 AM

    Matthew Gertner - AllPeersSkills You Need for a Career in Big Data [Technorati links]

    September 09, 2016 02:43 AM
    What skills do you need for a career in big data?
    Photo by CC user Kayaker~commonswiki on Wikimedia Commons. Image originally made by DARPA (public domain)

    Big data is one of the most significantly growing areas of business in general, and it’s also proving to be a lucrative career area for many people interested in technology and entering a field with a high growth level. According to Forbes, the salary for technical professionals with big data expertise and related skills is $124,000. Additionally, there is virtually no limit to the industries where big data professionals are in high demand. Also according to Forbes, the top five industries with the most significant demand for talent with data-related skills are Professional, Scientific and Technical Services; IT; Manufacturing; Finance and Insurance; and Retail Trade.

    So if you’re thinking of entering the business world as a data-related specialist, what are the skills hiring managers most often want to see?

    Problem-Solving Abilities

    While many of the skills you’ll need as someone who deals with data involve technical and statistical abilities, there are also soft skills required. Many businesses hiring big data professionals want someone who not only understands the numbers and the technology, but who is also a strong problem solver. Big data professionals’ role in today’s business world is often to take a problem and create a measurable solution, so it’s important to be creative and willing to think outside the box. In terms of creativity, it’s necessary to have a sense of curiosity, and the willingness and desire to explore new ways of doing things and create your own solutions.

    Hadoop

    In terms of technical skills and training, Hadoop training is undoubtedly one of the most important things you can have if you want a career involving data. Hadoop is a powerful big data platform, and it can also be tricky, which is why so many different types of companies are looking for people with broad proficiency in the platform. Hadoop training should cover the platform’s framework, and also offer both conceptual and hands-on experience. Many of the best certification and training programs will also include realistic projects to get an understanding of what it’s really like to work in a business using the platform.

    Data Visualization

    Another area of proficiency you should have if you’re pursuing a career in big data or even just technology? Data visualization skills. You’ll likely be expected to take massive amounts of information and transform them into visual elements that will provide both technical and non-technical audiences with an understanding of the insights you’re presenting.

    Programming Languages

    Finally, as well as learning the specifics of big data, it can also be helpful and make you seem more appealing to businesses who want to hire, if you know some general programming languages, such as Java or Python. This might not be an absolute requirement, but it’s a good way to set yourself apart in the competitive business world, particularly when you’re facing other candidates who have extensive big data experience. You might need something that’s distinctive to get the job, and having some knowledge of programming languages can be that distinction.

    The post Skills You Need for a Career in Big Data appeared first on All Peers.

    Matthew Gertner - AllPeersQuality Control is Important at Every Step of the Process [Technorati links]

    September 09, 2016 02:28 AM

    An important part of maintaining full quality management is ensuring the ingredients in your products meet your strict standards. It’s easy enough when you’re completely responsible for the sourcing of every single component, but few organizations can claim this autonomy. Most partner up with a chemical or additive supplier to help them find ingredients or develop entire formulas for their products. Therefore, it becomes a professional imperative to align yourself with a company that provides superior ingredients that reflect not just current marketing trends and safety regulations but your principles, too.

    The question of ingredients arises when you’re looking to update your formula to better reflect consumers’ needs. All-natural, preservative-free, environmentally friendly, and organic products are just some of the growing trends affecting the buying habits of the average North American consumer. As you develop your formula to incorporate these concerns, it’s important that you partner with a chemical supplier that can offer advice as technical chemists and process developers. Cambrian is a top chemical manufacturing company that shares their extensive market and product knowledge in order to provide innovative chemical solutions for your growing needs.

    Sourced from a global supply network, these solutions involve ingredients that will always meet your needs in regards to performance and quality control. They also have the ability to surpass them, as superior chemical distributors like Cambrian unite your North American industry with international ingredient manufacturers. By broadening your sources beyond your market, you don’t just get a trusted source – you also tap into an international market of information regarding chemical and additive regulations. Regardless of the industry in which your business is involved – whether you’re in food processing, pharmaceuticals, or something else entirely – the right chemical supplier can make the latest trends in ingredients and development a reality for your company.

    Quality control is a fundamental part of your company. Its management is a way for you to streamline your business while also ensuring your products deliver on customer satisfaction. While there are many factors involved in maintaining these standards, perhaps the most important is guaranteeing you start with the best ingredients for your products. Their properties should be considered thoroughly before you adopt them, and there’s no better way to vet their inclusion than by teaming up with an experienced chemical distributor. They’re committed to sourcing ingredients that reflect your (and your consumers’) priorities, so you can offer the best quality goods. When it comes to making decisions about where you source your ingredients, be sure to find a company you can trust to know their stuff and have the latest technology to back up their efforts.

    The post Quality Control is Important at Every Step of the Process appeared first on All Peers.

    September 08, 2016

    KatasoftTutorial: Setting Up An Awesome Git/CLI Environment on Windows [Technorati links]

    September 08, 2016 09:11 PM

    CLIs, or Command Line Interfaces, are extremely powerful when it comes to accessing the basic functions of your computer. In fact, there are some things you can only do from the command line, even on Windows. Beyond that, many programs just work better on the command line. Take Git, the most widely used modern version control system in the world today; Git was designed exclusively for the command line, and it is the only place you can run every available Git command. (Most GUIs only implement some subset of Git functionality for simplicity.)

    In this tutorial, we will learn how to set up a Git/CLI environment on Windows.

    Install Git on Windows

    Visit the Git website and download the latest Git for Windows installer (at the time of writing, the latest version is 2.9.3). This installer includes a command line version of Git as well as the GUI.

    Once you start the installation, you will see a simple Setup Wizard; just click Next through the steps and Finish to complete the installation. There is no need to change the default options. The one thing I would like to highlight is that the default terminal emulator is MinTTY instead of the Windows console, because the Windows console has some limitations. We’ll learn more about these limitations as we walk through the rest of this tutorial.

    Now you are ready to start using git and run your first commands! Open Git Bash and type the following command to verify your installation:

    $ git --version

    Then enter git --help to see all the available commands.

    Congratulations! You’ve just run your first git commands!
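    As a quick end-to-end check of the installation (this example is not from the original post; the name, email, and file name below are placeholders), you can initialize a throwaway repository and make a first commit from Git Bash:

```shell
# Sketch: exercise the basic workflow in a temporary directory.
cd "$(mktemp -d)"
git init -q .
git config user.name  "Your Name"          # placeholder identity,
git config user.email "you@example.com"    # required before committing
echo "hello git" > readme.txt
git add readme.txt
git commit -q -m "first commit"
git log --oneline        # shows the single commit you just made
```

    Deleting the temporary directory afterwards removes the whole repository, so this is a safe way to experiment.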

    Using Git with PowerShell

    Using Git in PowerShell
    Thanks to our previous git installation, the git binaries path should already be set in your PATH environment variable. To check that git is available, open PowerShell and type git. If you get information related to git usage, git is ready.

    If PowerShell doesn’t recognize the command, you’ll need to set your git binary and cmd path in your environment variables. Go to Control Panel > System > Advanced system settings and select Environment Variables.

    In System Variables, find PATH and add a new entry pointing to your git binaries and cmd, in my case I have them in C:\Program Files\Git\bin and C:\Program Files\Git\cmd.

    Beautify your PowerShell

    If you click on ‘Properties’ right after clicking the small PowerShell icon in the top left corner, you will find several visual features to customize your console just the way you want.

    In ‘Edit Options’ make sure to have ‘QuickEdit Mode’ checked. This feature will allow you to select text from anywhere in PowerShell and copy the selected text with a right-click, and paste it with another right-click.

    You can explore the different tabs, select your preferred font and font size, and even set the opacity to make your console transparent if you are using PowerShell 5.

    Now that you have a nice console with much-needed copy/paste functionality, you need something else to enhance your experience as a git user: you need Posh-Git.

    Posh-Git is a package that provides powerful tab-completion facilities, as well as an enhanced prompt to help you stay on top of your repository status (file additions, modifications, and deletions).

    Posh-git Installation

    To install Posh-git let’s use what we have learned so far about git and PowerShell. Start by creating a folder ‘source’ using the mkdir command:

    PS C:\> mkdir source

    Change your working directory to ‘source’ and type clone command:

    PS C:\> cd source
    PS C:\source> git clone https://github.com/dahlbyk/posh-git.git

    Verify that you are allowed to execute scripts in PowerShell by typing Get-ExecutionPolicy. The result should be RemoteSigned or Unrestricted. If the result is Restricted, run PowerShell as administrator and type this command:

    PS C:\source> Set-ExecutionPolicy RemoteSigned -Scope CurrentUser -Confirm

    Change your working directory to Posh-git and run the install command:

    PS C:\source\> cd posh-git
    PS C:\source\posh-git> .\install.ps1

    Reload your profile for the changes to take effect:

    PS C:\source\posh-git> . $PROFILE

    And you’re done!

    You can make changes to files in your repository and explore Posh-Git by typing git status.
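    For example, after editing a file you might see something like this (a sketch; the bracketed counts track file additions, modifications, and deletions, and will vary with your changes):

```powershell
PS C:\source\posh-git> echo "a change" >> README.md
PS C:\source\posh-git [master +0 ~1 -0]> git status
```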

    Setup Your SSH Keys

    Usually, you would use HTTPS protocol to communicate with the remote Git repository where you are pushing your code. This means that you must supply your credentials (username and password) every time you interact with the server.

    If you want to avoid typing your credentials all the time, you can use SSH to communicate with the server instead. SSH stands for Secure Shell. It is a network protocol that ensures that the communication between the client and the server is secure by encrypting its contents.

    SSH is based on public-key cryptography, so in order to authenticate via SSH to the Git repository you need to have a pair of keys: one public (which will reside on the server) and one private (the one you and only you will use to authenticate to the server). When you authenticate, your client uses the private key to sign a challenge; the server verifies that signature against the installed public key, and if it matches, you are authenticated. The private key itself never leaves your machine.

    To access your Git repositories you will need to create and install SSH keys. You can do this with OpenSSH, which already comes installed with Git. To generate your key pair, open Git Bash and enter the following command:

    $ ssh-keygen -t rsa -b 4096

    This will generate a key pair using RSA as the key type and will use 4096 bits for it. It will then prompt you to enter a location to save the key. If you press Enter, it will be saved in the default location.

    Enter a file in which to save the key (/Users/you/.ssh/id_rsa): [Press enter]

    Then you will be prompted to enter a passphrase. Type a secure one:

    Enter passphrase (empty for no passphrase): [Type a passphrase]
    Enter same passphrase again: [Type passphrase again]

    And that’s it! You have just created an SSH key pair. Easy, isn’t it? Now you have to add your SSH key to the ssh-agent. First of all, let’s make sure ssh-agent is installed by typing:

    $ eval "$(ssh-agent -s)" [Press enter]
    > Agent pid 8692

    Then, add your SSH key to the ssh-agent:

    $ ssh-add ~/.ssh/<your_private_key_file_name>

    You now have your private key installed on your computer, but you need to set the public key on the Git remote repository. This step depends on which Git hosting service you are using. There are tutorials available for both GitHub and BitBucket, the two most popular services.
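    As a sketch of that last step: print your public key so you can paste it into your hosting service’s settings page, then test the connection (GitHub shown here purely as an example):

```shell
$ cat ~/.ssh/id_rsa.pub
$ ssh -T git@github.com
```

    The private key (id_rsa, with no .pub extension) stays on your machine and should never be shared.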

    Use Console Emulators for Improved CLI Experience

    You may be asking yourself “Why should I use a console emulator on Windows instead of the native cmd?” The answer is simple: Console emulators let you choose which shell to run on them and provide you with a variety of configuration options, both for utility and aesthetics.

    Most emulators also support multiple tabs. On each tab, you can run a different shell, or, if you work with multiple git repositories, you could configure multiple tabs pointing to your different working directories. There are emulators that can save the state of each tab, so when you open up your emulator again they will be there just as you left them.

    Also, if you want to improve your productivity you can configure some hot-keys to speed up repetitive tasks or even use some very useful commands like cat or grep. There are several alternatives that offer lots of functionality and integrate very well with Windows. Let’s review some of them:

    ConEmu

    ConEmu allows you to run “console” applications such as cmd.exe, powershell.exe, Far Manager, bash, etc., and “graphical” applications like Notepad, PuTTY, KiTTY, GVim, Mintty and so on. Given that it is not a shell, it does not provide some standard shell features like remote access, tab-completion or command history.

    You can pre-configure tabs, give them custom names as well as Shell scripts to run when they open, plus additional configuration options; nearly everything about ConEmu can be customized.

    Also, you can search all the text that has been printed or entered in the console history, resize the main window as much as you want, and check the progress of an operation with a quick glance at the taskbar, without bringing the app to the foreground.

    Installation is super easy: just unpack or install to any folder and run ConEmu.exe.

    Cmder

    Cmder builds on ConEmu. It combines all of ConEmu’s features with cmd enhancements from clink (such as bash-style completion in cmd.exe and PowerTab in powershell.exe) and Git support from msysgit. The current branch name is shown on the prompt. This feature is built-in, so you don’t need to install any extension as we did for PowerShell.

    With Cmder you can run basic Unix commands like grep. Also, you can define aliases in a text file for common tasks or use the built-in aliases, like e. which opens an Explorer window at your current location. Installation is easy: choose and download your preferred Cmder version (mini or full), unzip the files, and run Cmder.exe.
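    As a quick illustration of those Unix commands, here is a throwaway example of the kind of filtering you can do in a Cmder tab (the file and branch names are made up for the demo):

```shell
# List some example branch names in a file, then filter them with grep,
# just as you might filter the output of a real git command
printf 'feature/login\nfeature/signup\nbugfix/header\n' > branches.txt
grep 'feature' branches.txt
```

    The same pattern works on piped output, e.g. git branch | grep feature.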


    Console2

    With Console2 you can not only create as many tabs as you want but also name them individually based on what is running on each. Also, you can assign a shell script to each tab, set to run automatically when the tab opens.

    You can even customize the keyboard shortcuts (like changing Open New Tab to Ctrl+T) and the appearance (like font, colors, and size).

    Console2 does have some drawbacks. The first time I tried to configure a new tab to point to the git shell, following the normal flow (settings > tabs > shell : git shell path), the tab opened in a separate window, outside of the Console2 context. It took me a while to find out how to configure Console2 to open the git shell as a new tab inside of its context. If you need a hand with this, you should check this link.

    Also, it lacks the functionality to allow multiple tabs to automatically run predefined scripts. Instead, you have to open everything manually every time you start the application.

    ConsoleZ

    As a fork of Console2, ConsoleZ should look quite familiar, and it will recognize all of your Console2 custom settings. If you are already using Console2, you should give it a try.

    Besides the Console2 features, there are many more options in nearly all the settings panels, like splitting tabs into views (horizontally and vertically), settable opacity of the text background color, snippets, and zooming.

    As with Console2, ConsoleZ is not able to open pre-created tabs on startup.

    PowerCmd

    PowerCmd offers similar features to the others listed above, but it has other cool features, such as auto-log (which prevents you from losing your work by saving the output of your consoles automatically), autocompletion for files under the current directory, and bookmarks with the ability to move between them easily.

    Also, you can save and restore your command line sessions from last time. This emulator isn’t free, but does offer a free trial so you can take it for a test drive.

    Go Forth and Experiment

    You have so many options and enough information to start with the command line. There are no excuses for not starting to play with it using a good emulator! So, choose your favorite one, clone your repository and continue running your git commands on the CLI.

    Interested in learning more about git commands and CLI tools? Check out these resources:

  • Building Simple Command Line Interfaces in Python
  • Set Up a Smoking Git Shell on Windows
  • Git in Powershell
    The post Tutorial: Setting Up An Awesome Git/CLI Environment on Windows appeared first on Stormpath User Identity API.

    Matthew Gertner - AllPeersHow Can Board Portals Save You Time and Money? [Technorati links]

    September 08, 2016 07:53 PM

    How Can Board Portals Save You Time and Money?
    Photo by CC user kaboompics on Pixabay

    Meetings are a pain. You thought that it would be easy, that it would stick to your schedule. But someone called you up and said that the place you booked two days ago just closed down.

    You tried to reschedule, calmly telling yourself that it’s just a minor setback. On your second attempt, you realized that the attendees weren’t properly informed about the meeting, so you ended up explaining the agenda instead of proceeding with the meeting.

    Those are just two of the many scenarios that may happen when planning a meeting – not to mention all the problems that may happen during a meeting.

    Entrepreneur.com stated that in America, unproductive meetings cost $37 billion a year. With that amount of money, companies could’ve invested in new ventures, paid higher wages or even used it to help others. It’s tragic to see all these resources and opportunities go down the drain and it’s all because of poorly managed meetings.

    Luckily, board portals are here to change the paradigm. Board portals are meeting management apps; they are built to make meetings better and faster. How can board portals save you time and money? Read on below…

    Less is more

    A normal meeting requires a venue, documents and presentation tools. These are all expensive resources. However, board portals can easily do all those things at a fraction of the cost.

    Board portals are digital meeting rooms that provide attendees with all the tools they need to properly conduct a meeting.

    No one gets left behind

    Punctuality and attendance are big factors in meetings running long. If one of the attendees is late, the meeting will have to start late, and if one is absent, the whole meeting has to be rescheduled, which is frustrating for everyone else.

    Board portals can create notifications so that the attendees won’t forget their meetings, similar to the push notifications of social media sites.

    Smart data for smart meetings

    Proper data creates proper answers, proper answers create proper solutions, proper solutions save everyone’s time and effort.

    By keeping records of the members, the meeting organizers can easily check the time and availability of each member. This way, they can schedule more time-friendly meetings and avoid unnecessary rescheduling.

    Remote access for success

    Stuck in traffic? Can’t go to work because of unforeseen house problems? Flat tire? Not a problem.

    With remote access, users can easily access their meetings as long as they have Wi-Fi or mobile data and a device with a board portal app installed. This feature also eliminates venue issues and other possible distractions, such as picking out the right clothes to wear or hoping that the streets are not clogged with traffic.

    This is very important for emergency meetings, impromptu checks, and quick office huddles.

    A lean mean scheduling machine

    When you set the time, you have to start on time. Without a proper schedule, a meeting can be stretched, chopped up, or postponed. A good way to prevent this is to tell everyone that each item on the agenda has a time slot, or you can set the board portal to do it for you. By creating a schedule, you set expectations for answers and feedback.

    Setting agendas like you mean it

    Without an agenda, a meeting is just as good as a friendly get-together. It sounds fun, but it will probably waste a lot of time and money.

    Being able to set the agenda and its documents is a lifesaver for any organizer. This sets the tone and gravity of the meeting. It also helps the attendees prepare and organize themselves so that they can participate in the meeting.

    Power tools for a power meeting

    Presentations are the interactive part of the meeting. Usually, a presenter shows and explains documents to get their point across.

    Board portals follow the same concept, with simple tools such as highlights, footnotes and drawing tools. These tools help the presenter give emphasis or directions on how to tackle a certain agenda item.

    A clear voting system

    A vote is like a thousand words: it’s a compressed version of a person’s decisions, beliefs, and responses. Voting systems avoid a lot of possible chit-chat, justifications, and possible shifts of decision due to peer pressure.

    And it ultimately keeps the discussion of company politics out of the meeting. If fellow members want to discuss politics, they will have to do it after the meeting.

    The goal is in your hands

    If a meeting can be shortened, it should be. Think of board portals like Azeus Convene as the natural progression of meetings. They make meetings more objective, less repetitive, and highly interactive.

    Traditionalists might still go with face-to-face meetings, but with Convene as the meeting medium, you can now achieve paperless meetings, avoid wasting time, and work through agendas with a click.

    Start making your meetings more productive. Who knows? It might help you save $37 billion.

    The post How Can Board Portals Save You Time and Money? appeared first on All Peers.

    Matthew Gertner - AllPeersWill Identity Theft Be Your Business? [Technorati links]

    September 08, 2016 05:10 PM
    Will Identity Theft Be Your Business?
    Photo by CC user Marcos Tulio on publicdomainpictures.net

    Could your business withstand being the victim of identity theft?

    While some companies can survive such a matter, others would either need significant time to recover or would never recover at all.

    That said, what is your business doing to steer clear of identity thieves?

    Close the Doors on I.D. Theft

    In order for your business to do its best in closing the doors on identity theft, keep these tips in mind:

       

    1. Plan – First and foremost, what kind of plan do you have set up to negate identity theft as much as possible? Unfortunately, some business owners are of the opinion that I.D. theft can’t happen to them, and that they therefore do not have to guard against it. That line of thinking can be one of the most destructive ones possible, especially as identity thieves continue to try and exploit businesses and consumers at each and every turn. Always be on guard for identity theft, avoiding the idea that you and your business are untouchable. If you’re not protecting customer identities, you are setting yourself up for quite a fall.

    2. Employees – Your workers play several roles when it comes to identity theft and your business. First, they are a great line of defense against the problem, especially since they deal with clients on a firsthand basis. Make sure they stay cognizant of what is going on both online and off, looking for any red flags that may suggest your brand is being targeted for I.D. theft. Secondly, as much as you want to trust those you hire (and you should), there is always the possibility that one or more of your workers will in fact be identity thieves themselves. It should not come as a huge surprise that some businesses have been successfully targeted for I.D. theft by those right under their noses. As a result, the crime may go unnoticed for a period of time. If you suspect one or more of your workers are engaging in identity theft against you, an immediate investigation needs to take place. Remember, each day you let go by without looking into the matter is one more day you could lose money and/or clients.

    3. Education – Being educated about the dangers of identity theft is a necessity, not a choice. As a business owner, you have a responsibility to not only your customers, but also your employees, to keep your brand as removed as possible from I.D. theft. If an identity theft attack is successful against your business, it could put you and your team out of work (depending on the severity of it). Since you run a business, it is important that you are as educated as possible about how identity theft works, what types of businesses are typically targeted, and how to recover from such an attack without having to close up shop. There are plenty of articles online about how to combat identity theft, not to mention videos too. Follow up on a number of these pieces to learn more about whether or not your brand is significantly at risk.

    4. Warnings – Finally, do you know the telltale signs of identity theft? If not, get up to speed sooner rather than later. For instance, if your company’s financial books are not adding up, there could be something fishy going on. The same holds true for any company credit cards showing differing balances than what they should. Also look at whether any employees have been acting strange as of late, especially those who may be charged with doing your accounting tasks.

     

    The negative fallout from even one successful identity theft attack against your business could be catastrophic, so do not take the matter lightly.

    If you are not guarding against identity theft, you are making it easier for such criminals to strike.

    When you have a monitoring plan in place to cover all of your financial undertakings, educate yourself and your workers on the dangers of I.D. theft, and regularly review your safeguards, you greatly reduce the odds of being the next victim. Will identity theft be your business? If you care about your livelihood, you will certainly make it so.

    The post Will Identity Theft Be Your Business? appeared first on All Peers.

    Mike Jones - MicrosoftInitial Working Group Draft of OAuth Token Binding Specification [Technorati links]

    September 08, 2016 04:24 PM

    OAuth logoThe initial working group draft of the OAuth Token Binding specification has been published. It has the same content as draft-jones-oauth-token-binding-00, but with updated references. This specification defines how to perform token binding for OAuth access tokens and refresh tokens. Note that the access token mechanism is expected to change shortly to use the Referred Token Binding, per working group discussions at IETF 96 in Berlin.

    The specification is available at:

    An HTML-formatted version is also available at:

    September 07, 2016

    KatasoftTutorial: Build a Spring WebMVC App with Primefaces [Technorati links]

    September 07, 2016 03:45 PM

    Primefaces is a JavaServer Faces (JSF) component suite. It extends JSF’s capabilities with rich components, skinning framework, a handy theme collection, built-in Ajax, mobile support, push support, and more. A basic input textbox in the JSF tag library becomes a fully-featured textbox with theming in Primefaces.
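    To illustrate, swapping a standard JSF input for its Primefaces counterpart is a one-tag change (a minimal sketch; the p: namespace is configured later in this post, and searchText is a hypothetical bean property):

```xhtml
<!-- Standard JSF input textbox -->
<h:inputText value="#{customerBean.searchText}" />

<!-- Primefaces equivalent: same value binding, with theming and Ajax built in -->
<p:inputText value="#{customerBean.searchText}" />
```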

    Frontend frameworks like AngularJS provide UI components, Ajax capabilities, and HTML5 compliance much like Primefaces does. If you are looking for a lightweight application with quick turnaround time, AngularJS could be your best bet. However, when dealing with an enterprise Java architecture, it is often best to use a mature framework like Primefaces. It is stable and ever-evolving, with the help of an active developer community.

    Primefaces also makes a UI developer’s life easier by providing a set of ready-to-use components which, otherwise, would take a considerable amount of time to code – e.g., the dashboard component with drag and drop widgets. Some other examples are slider, autocomplete components, tab views for pages, charts, calendars, etc.

    Spring WebMVC and Primefaces

    In Spring WebMVC, components are very loosely coupled. It is easy to integrate different libraries to the model layer or the view layer.

    In this tutorial, I am going to walk you through using Spring WebMVC and Primefaces to create a basic customer management application with a robust frontend. All the code can be found on Github.

    Create a Maven Project

    Create a new Maven Project using your favorite IDE. After creating the project, you should see the pom.xml in the project folder. A minimal pom.xml should look like this:

    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    
    <groupId>com.stormpath.blog</groupId>
    <artifactId>SpringPrimefacesDemo</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <packaging>war</packaging>
    
    <name>SpringPrimefacesDemo</name>
    <url>http://maven.apache.org</url>
    
    <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>
    
    </project>

    Add Spring Libraries

    Next, add the necessary Spring libraries to the dependencies section of the pom.xml.

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <jdk.version>1.7</jdk.version>
        <spring.version>4.3.2.RELEASE</spring.version>
    </properties>
    
    <dependencies>
        <dependency> 
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-webmvc</artifactId>
            <version>${spring.version}</version>
         </dependency>
    </dependencies>

    Create Your Sample Project with Spring WebMVC

    For the customer management application we are going to build, we need to create a mock customer database. It will be a POJO with three attributes. The Customer class would look like this:

    package com.stormpath.blog.SpringPrimefacesDemo.model;
    
    public class Customer {
    
        private String firstName;
        private String lastName;
        private Integer customerId; 
    
        public String getFirstName() {
            return firstName;
        }
        public void setFirstName(String firstName) {
            this.firstName = firstName;
        }
        public String getLastName() {
            return lastName;
        }
        public void setLastName(String lastName) {
            this.lastName = lastName;
        }
        public Integer getCustomerId() {
            return customerId;
        }
        public void setCustomerId(Integer customerId) {
            this.customerId = customerId;
        }
    }

    Then we need to create a bean class to manipulate the Customer class:

    package com.stormpath.blog.SpringPrimefacesDemo.presentation;
    
    import java.util.ArrayList;
    import java.util.List;
    
    import javax.annotation.PostConstruct;
    import javax.faces.bean.ManagedBean;
    import javax.faces.bean.ViewScoped;
    
    import com.stormpath.blog.SpringPrimefacesDemo.model.Customer;
    
    @ManagedBean
    @ViewScoped
    public class CustomerBean {
        private List<Customer> customers;
    
        public List<Customer> getCustomers() {
            return customers;
        }
    
        @PostConstruct
        public void setup()  {
            List<Customer> customers = new ArrayList<Customer>();
    
            Customer customer1 = new Customer();
            customer1.setFirstName("John");
            customer1.setLastName("Doe");
            customer1.setCustomerId(123456);
    
            customers.add(customer1);
    
            Customer customer2 = new Customer();
            customer2.setFirstName("Adam");
            customer2.setLastName("Scott");
            customer2.setCustomerId(98765);
    
            customers.add(customer2);
    
            Customer customer3 = new Customer();
            customer3.setFirstName("Jane");
            customer3.setLastName("Doe");
            customer3.setCustomerId(65432);
    
            customers.add(customer3);
            this.customers = customers;
        }
    }

    Create the Frontend with Primefaces

    Since we are going to add Primefaces components to our UI, we will need a UI with JSF capabilities. Add the JSF dependencies to your pom.xml:

    <properties>
           …..
            <servlet.version>3.1.0</servlet.version>
            <jsf.version>2.2.8</jsf.version>
           …..
     </properties>
    …
    
    <dependency>
        <groupId>javax.servlet</groupId>
        <artifactId>javax.servlet-api</artifactId>
        <version>${servlet.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-api</artifactId>
        <version>${jsf.version}</version>           
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-impl</artifactId>
        <version>${jsf.version}</version>            
    </dependency>

    Note: If your target server is a Java EE compliant server like JBoss, the JSF libraries will be provided by the server. In that case, the Maven dependencies can conflict with the server libraries. You can add the provided scope to the JSF libraries in the pom.xml to solve this.

    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-api</artifactId>
        <version>${jsf.version}</version>
        <scope>provided</scope>
    </dependency>
    <dependency>
        <groupId>com.sun.faces</groupId>
        <artifactId>jsf-impl</artifactId>
        <version>${jsf.version}</version>
        <scope>provided</scope>
    </dependency>

    Create a web deployment descriptor – web.xml. The folder structure needs to be as shown below (the other files referenced will be created below):

    webapp/
    ├── META-INF
    │   └── MANIFEST.MF
    ├── WEB-INF
    │   ├── faces-config.xml
    │   └── web.xml
    └── index.xhtml

    web.xml content:

    <?xml version="1.0" encoding="UTF-8"?>
    
    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0">
    
        <servlet>
            <servlet-name>Faces Servlet</servlet-name>
            <servlet-class>javax.faces.webapp.FacesServlet</servlet-class>
            <load-on-startup>1</load-on-startup>
        </servlet>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>*.xhtml</url-pattern>
        </servlet-mapping>
        <servlet-mapping>
            <servlet-name>Faces Servlet</servlet-name>
            <url-pattern>/faces/*</url-pattern>
        </servlet-mapping>
        <welcome-file-list>
            <welcome-file>faces/index.xhtml</welcome-file>
        </welcome-file-list>
    </web-app>

    Create faces-config.xml in the WEB-INF folder:

    <?xml version="1.0" encoding="UTF-8"?>
    <faces-config
        xmlns="http://xmlns.jcp.org/xml/ns/javaee"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-facesconfig_2_2.xsd"
        version="2.2">
    
    </faces-config>

    Add index.xhtml to the webapp folder.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
        </body>
    </html>

    Note the XML namespaces for JSF included in the xhtml. Now we can add the proper dependencies to the pom.xml.

    …
    <properties>
        ...
        <primefaces.version>6.0</primefaces.version>
    </properties>
    
    ...
    <dependency>
        <groupId>org.primefaces</groupId>
        <artifactId>primefaces</artifactId>
        <version>${primefaces.version}</version>
    </dependency>

    Finally, add a class implementing the WebApplicationInitializer interface. This will be a bootstrap class for Servlet 3.0+ environments, used to start the servlet context programmatically, instead of (or in conjunction with) the web.xml approach.

    package com.stormpath.blog.SpringPrimefacesDemo;
    
    import javax.servlet.ServletContext;
    import javax.servlet.ServletException;
    import org.springframework.context.annotation.ComponentScan;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.web.WebApplicationInitializer;
    import org.springframework.web.context.ContextLoaderListener;
    import org.springframework.web.context.support.AnnotationConfigWebApplicationContext;
    import org.springframework.web.servlet.config.annotation.EnableWebMvc;
    
    @EnableWebMvc
    @Configuration
    @ComponentScan
    public class WebAppInitializer implements WebApplicationInitializer {
    
        @Override
        public void onStartup(ServletContext sc) throws ServletException {
            AnnotationConfigWebApplicationContext context = new AnnotationConfigWebApplicationContext();
            // Register this annotated class so its @Configuration and @ComponentScan take effect
            context.register(WebAppInitializer.class);
            sc.addListener(new ContextLoaderListener(context));
        }
    }

    Configure Primefaces

    Now we will modify the index.xhtml file and create a data table to display the customer data. The XML namespaces need to be modified to add the Primefaces reference.

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:p="http://primefaces.org/ui">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
            <p:dataTable var="customer" value="#{customerBean.customers}" widgetVar="customerTable" emptyMessage="No customers found">
                 <p:column headerText="Id">
                     <h:outputText value="#{customer.customerId}"/>
                 </p:column>
                <p:column headerText="First Name">
                    <h:outputText value="#{customer.firstName}"/>
                </p:column>
                <p:column headerText="Last Name">
                    <h:outputText value="#{customer.lastName}"/>
                </p:column>
            </p:dataTable>
        </body>
    </html>

    Deploy to Your Application Server (and Test)

    Build the project, deploy the war to the application server, and check the result in your browser.

    Extended Capabilities

    Modify the code as shown below to easily produce a sortable data table with filters. Add the following line to CustomerBean.java:

    private List<Customer> filteredCustomers;

    …and:

    public List<Customer> getFilteredCustomers() {
        return filteredCustomers;
    }
    
    public void setFilteredCustomers(List<Customer> filteredCustomers) {
        this.filteredCustomers = filteredCustomers;
     }

    Modify index.xhtml to:

    <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
    <html xmlns="http://www.w3.org/1999/xhtml"
    xmlns:h="http://java.sun.com/jsf/html"
    xmlns:f="http://java.sun.com/jsf/core"
    xmlns:ui="http://java.sun.com/jsf/facelets"
    xmlns:p="http://primefaces.org/ui">
    
        <h:head></h:head>
        <body>
            <h1>Spring MVC Web with Primefaces</h1>
            <h:form>
                <p:dataTable var="customer" value="#{customerBean.customers}" widgetVar="customerTable" emptyMessage="No customers found" filteredValue="#{customerBean.filteredCustomers}">
                    <p:column headerText="Id" sortBy="#{customer.customerId}" filterBy="#{customer.customerId}">
                        <h:outputText value="#{customer.customerId}"/>
                    </p:column>
                    <p:column headerText="First Name" sortBy="#{customer.firstName}" filterBy="#{customer.firstName}">
                        <h:outputText value="#{customer.firstName}"/>
                    </p:column>
                    <p:column headerText="Last Name" sortBy="#{customer.lastName}" filterBy="#{customer.lastName}">
                        <h:outputText value="#{customer.lastName}"/>
                    </p:column>
                </p:dataTable>
            </h:form>
    </body>
    </html>

    More on Primefaces

    All the UI components available with Primefaces are showcased at the Primefaces Showcase.

    Apart from the components extended from the JSF tag library, Primefaces offers many versatile components and plugins, known as Primefaces Extensions, that are meant to make developers’ lives easier and web pages more attractive.

    And now, it’s time to add authentication to your Primefaces webapp! Learn more, and take Stormpath for a test drive, with these resources:

  • A Simple WebApp with Spring Boot, Spring Security, & Stormpath — In 15 Minutes!
  • 5 Practical Tips for Building Your Spring Boot API
  • OAuth 2.0 Token Management with Spring Boot and Stormpath

    The post Tutorial: Build a Spring WebMVC App with Primefaces appeared first on Stormpath User Identity API.

    Mythics: Dramatically Increasing a Competitive Advantage with the Oracle Database Appliance [Technorati links]

    September 07, 2016 12:55 PM

    See a fantastic new client spotlight by Oracle and Mythics highlighting our customer Clinispace and their use of the Oracle Database Appliance (ODA) with expert…

    Matthew Gertner - AllPeers: The Beginner’s Guide to Starting a Blog [Technorati links]

    September 07, 2016 12:40 AM

    In case you didn’t know, blogs (a term that is a contraction of “weblogs”) first became popular during the 1990s, when people started to write online articles about their favorite topics, such as traveling, food, sports, health, business, fashion and lifestyle. Since then, blogging has gone from strength to strength. In fact, pick any subject at all and you’ll most likely find somebody, somewhere blogging about it. If you like the idea of sharing information, knowledge, inspiration or skills, or even if you just want to entertain folks and make new friends, here are a few tips to get you started.


    Your domain name

    If you already know what you want to write about, then you need to think about bagging some personal space on the internet, and for this you’ll need a domain name. Some people use their own name; others go with a preferred theme, if that’s appropriate, opting to use this in the title. Examples might include parenting, cooking, hobbies or pets. If you want to create a community that shares interests or experiences, you might opt for a title that reflects this – our own website focuses on the meeting of minds, for example.

    Remember that you may find your first choice is not available: after all, if you have opted for a really good title, someone else may have taken it. For a personal website you can always add a middle initial or extra dashes if your first choice is already in use. To register your new domain you can opt for a free site, such as Blogger, WordPress or Tumblr. You’ll get really helpful tips on these sites about how to lay out your text, place images and embed videos to create the effect you want. Take the time to visit a few sites you really like and make a mental note of how they organize their look.

    Your posts

    Once your blog is set up you can start writing and posting your articles to it. As with any word processor software you will have options to choose the titles of your posts, the fonts, justification and colors. Your blog menu will also contain choices for uploading and editing images and with a little practice you’ll be able to assemble posts that look as good as they read.

    If you are going to write about a specialist subject, such as health and fitness for example, it’s best to take some time to do proper research. After all, you don’t want to find you are giving people incorrect information, or merely rehashing the contents of someone else’s blog.

    In fact, the quality of your blog content – what and how you actually write, plus how you present it – makes a tremendous difference to the amount of attention it will get. In terms of presentation you need to use great images and short but sharp videos to make your content interesting and catch the eye of viewers. Use a reputable company such as Dreamstime as a resource for top quality stock images and video footage. If you get it right, a picture really does paint a thousand words.

    Promoting your blog

    This is an area where you can’t afford to be shy. If you don’t promote your blog, no one will read it and you may feel you might as well not have bothered in the first place. Instead, use all the free channels at your fingertips and make the most of your social media connections. Most programs allow you to link to social media outlets such as Twitter and Facebook as soon as you publish a post. There’s also no harm in sending out a gentle reminder to friends and followers in case some people missed your announcement first time round.

    Don’t be afraid to comment in a useful way on other blogs that are similar or on the same topic. That’s how you gather new followers. If you comment often you can also link to another blog, creating a ‘trackback’ link to your own blog.

    Some bloggers like to offer guest posts to others. This means you could write something for another blogger’s site and promote your own site at the same time. In return, you can offer the same facility to other bloggers, especially if they are writing about similar themes or topics. It’s best to make sure your ideas are more or less in tune with theirs, of course.

    Online forums are also a good way to spread the word about your own writing. You will most probably find that if you make useful or insightful comments on a regular basis, then people will want to know more about who you are and what you write.

    Regular posting

    This is one of the hurdles that some would-be bloggers never manage to overcome. As well as aiming for high quality writing and top class images you also need to post regularly. Every couple of days is best but at least once a week is essential, as otherwise people will simply forget that you and your blog are there. Frequency of posts is also a way to ensure that people will make repeat visits to your blog, in case they’ve missed anything.

    Earning money from your blog

    Finally, as a beginner blogger you may be beguiled by the idea that you can actually make money from your blog. This is certainly true providing you have achieved a sizeable following – advertisers are willing to pay you for the opportunity to place ads on your site. The level of visibility of these ads has to be carefully judged, however, because you won’t want to put off site visitors if the ads are too dominant.

    Also, your chosen topics or themes will make a difference because advertisers will use these to judge the demographic your blog is reaching. For example, business or finance topics are likely to appeal to those with a healthy income, whereas a blog about boy bands is likely to attract young teenagers who are less interesting to advertisers in this respect.

    The post The Beginner’s Guide to Starting a Blog appeared first on All Peers.

    September 06, 2016

    Matthew Gertner - AllPeers: Karen Phillips About The Work Done At Phillips Charitable Organization [Technorati links]

    September 06, 2016 07:34 PM

    There are many different charitable organizations around the world, and the Phillips Charitable Organization is one of them. It is a really interesting foundation that has managed to help many people over the past few years. Karen, the wife of Infor CEO Charles Phillips, talked to us about the work that is done at the Phillips Charitable Organization. Here is what we found out.

    The Phillips Organization (full name: Karen and Charles Phillips Charitable Organization) is a 501(c) non-profit created to offer financial aid to disadvantaged students and single parents interested in the world of engineering. Because Charles Phillips served in the Marine Corps, the organization also offers help to wounded veterans. The PCO board includes Karen and Charles Phillips and two of their closest friends: Young Huh and Eric Garvin.


    Most of the programs developed by the organization are based on grants, which are offered to students who are in dire need of financial aid and who show unusually high potential. In the past two years alone, the organization has awarded over one hundred grants. They have been highly successful, helping students achieve great results; many of the recipients are considered the future of US engineering.

    The work done through this foundation is not new to the PCO board members. All four are friends and have given a lot of money to traditional charities over the years. But while those donations took little effort, the four felt they could get more involved. Large organizations are unfortunately filled with bureaucracy, and Karen and Charles Phillips did not want to go through that.

    What is interesting about the charity is that it was created in much the same way that Charles does business: the focus was put on the working environment and on efficiency while minimizing running costs. This means the finances needed to run the charity are much lower than at comparable organizations. Naturally, Infor-based cloud apps are used to improve the charity’s effectiveness.

    The charity the friends created has no administrative overhead. Decisions can be made quickly, and the interest groups that emerge are easy to analyze. To put it simply, every board member can be involved at a personal level. It is much easier to offer grants to those in dire need when fewer people are responsible for making the choice.

    The work of the organization will continue in the future, and it will definitely be worth watching.

    The post Karen Phillips About The Work Done At Phillips Charitable Organization appeared first on All Peers.

    Mike Jones - Microsoft: Second public draft of W3C Web Authentication Specification [Technorati links]

    September 06, 2016 04:54 PM

    The W3C Web Authentication working group has announced publication of the second public draft of the W3C Web Authentication specification. The working group expects to issue more frequent working drafts as we approach a Candidate Recommendation.

    CA on Security Management: How security enables digital transformation [Technorati links]

    September 06, 2016 02:00 PM
    Apparently, many enterprises still view security and innovation as opposing forces that need to be chosen between or, at best, balanced. Reading a recent CIO article titled,…

    The post How security enables digital transformation appeared first on Highlight.


    Paul Trevithick: Adaptive, responsive and mobile friendly sites [Technorati links]

    September 06, 2016 10:56 AM


    Are adaptive, responsive and mobile friendly all the same? The answer is not quite. Let’s take a look at the differences between them.

    Difference between responsive and friendly sites

    Many people assume that a mobile friendly site is designed specifically for mobile devices. In fact, it is simply a site whose interface works across all kinds of devices.

    So what is the difference between a responsive and a mobile friendly site? A responsive site alters its layout based on the device it is viewed on, reflowing into a single-column design that fits the device screen. A mobile friendly site, by contrast, looks the same as the standard desktop site, only shown at a smaller scale.

    In simple words, a responsive site is always mobile friendly: it shares the qualities of a mobile friendly site, but it also adapts its spacing and navigation to whatever device it is displayed on.

    Difference between adaptive and responsive

    Adaptive and responsive sites are similar in concept but different in practice. Both change their layout based on the device they are viewed on. The main difference is that a responsive site adjusts fluidly to any screen size, while an adaptive site switches between fixed layouts at selected breakpoints.

    Which type of site should you use?

    All in all, your choice will depend on what kind of site you have and where you get most of your traffic. If a large share of your traffic comes from mobile devices, you may want to opt for an adaptive or responsive site. If your mobile traffic is low, however, a simple mobile-friendly site may be all you need; an adaptive or responsive site isn’t always necessary in that case.

    The post Adaptive, responsive and mobile friendly sites appeared first on Incontexblog.org.

    August 31, 2016

    Matthew Gertner - AllPeers: Missing The Sun, Sea and Sand? Simple Tricks To Get that ‘Beach Look’ For Your Home [Technorati links]

    August 31, 2016 09:22 PM

    Don’t you wish you could spend all day, every day on the beach? Well, you can – sort of!

    If you love nothing more than feeling the sun on your back and sand in your toes as the turquoise water laps at the shore and the palm trees blow softly in the summer breeze, why not re-create it in your home?

    The beach is a happy place for many – a place where the stresses and worries of everyday life are forgotten and relaxation ensues – a place where loved ones can have fun and spend quality time together. The way you feel on the beach is exactly how you want to feel at home, isn’t it?

    So, whether you live near the beach or just dream about the ocean, here is how to get the look for your home.

    Colour Scheme:

    White is a go-to colour for beach houses – it gives a clean, calm and luxurious feel – but pastels work really well too, particularly blues, greens, corals and yellows.

    Pops of turquoise are always a good idea throughout the house to bring the tranquility of the calm ocean waters inside.

    Wood detail and rustic accents are perfect to complete the look, as are beach-themed stencils and decals.

    These stencils could be things like words that really spell out your love for the seaside, such as ‘life is better at the beach’, or images such as shells, anchors, starfish, beach huts, palm trees and so on. These can be used in any way you like, perhaps a large image that becomes the focus of the room, or smaller ones that are incorporated subtly across the space.

    Furniture:

    When you are choosing furniture to fit in with your beach theme, opt for wicker as well as reclaimed and rustic wood.

    You could, of course, buy the base of your furniture and then customize it to your preferred beach look.

    For example, you could take a look at www.divancentre.co.uk to get the base for your bed and then you could create a headboard, perhaps in the shape and colour of waves, from reclaimed wood.

    The little things make a big difference:

    If you don’t have a view of the beach, seaside prints are the perfect alternative. You could even blow up your favourite photo from your own beach trip and perhaps have it put onto canvas.

    Fairy lights are a great addition to beach-themed rooms, as their sparkles will emulate the stars at night.

    Finish the look with nautical ornaments from boats to starfish. You could even swap everyday items – shells in place of door and cupboard handles, perhaps? This means the theme continues in the most unlikely of places.

    DIY:

    When you are on the beach, why not gather together some sand, shells, driftwood and stones and create your own, unique beach-inspired ornaments?

    You could put your smaller stones and/or sand into a glass jar – the perfect spot for a candle.

    Shells are great for decorating a range of household items, from mirrors to vases.

    Likewise, rope is always good to give a beachy effect, so use it for hanging things or, like the shells, it could line mirrors or wall art.

    The post Missing The Sun, Sea and Sand? Simple Tricks To Get that ‘Beach Look’ For Your Home appeared first on All Peers.

    Matthew Gertner - AllPeers: Smart Renovations: Get A Better Price For Your Home With These Quick Fixes [Technorati links]

    August 31, 2016 06:22 PM

    The need for renovations arises every few years, but renovating need not be just about maintenance. There are lots of ways to add value to your house by renovating smartly.

    In a recent interview, property renovation expert Cherie Barber shared her views on how homeowners can add value by renovating. “Focus on what’s visible,” says Cherie, “concentrate on the areas buyers love.”


    By focusing on certain crucial parts of the house, you can generate a substantial return on your renovation investments. So, if you plan on getting the house redone in the near future, here are some quick fixes you should probably focus on:

    Focus on the Look and Feel

    Cherie recommends homeowners focus on the look and feel of the property to greatly boost its value. A fresh coat of paint or an updated look for the front entrance is likely to create an inviting atmosphere. A property that sets a good first impression is likely to fetch a much better price from a buyer. Buying a home is, after all, an emotional experience. Set the right mood and you’ll do wonders for the property’s value.

    Bathrooms

    Bathrooms are one of two essential parts of the house that can make or break the selling price (we’ll get to the other one in just a bit).

    Bathrooms need to be sparkling clean and updated with the best fixtures. Homeowners need to aim for luxurious and modern looking bathrooms. Focus on providing ample storage and lots of space. Small amenities like his and her vanities go a long way too.

    Kitchens

    Kitchens are arguably more important than bathrooms when it comes to selling a property. “It’s the engine of the whole house,” believes Cherie Barber. The quickest way to add value to your kitchen is to add in an island. An island bench can add space and create a hub for the entire family, which is really attractive to a homebuyer.

    Space

    The best way to get the most bang for your buck while renovating is to try and add space to the property. The cheapest way to create more space is to minimize the furniture and change the layout. However, if your budget allows for an added bedroom, you can boost your property’s value by $30,000 to $150,000 depending on where you live. Space is the single most sought after feature of a property and adding more space is never a bad investment.

    Essentials

    A fresh coat of paint and new light fittings are all essentials when you’re trying to sell a property. These renovations don’t cost a lot and are very likely to be noticed by homebuyers, which is what makes them so crucial. Go for energy efficient LED lights wherever possible (eco-friendly homes get a better price) and a lively color scheme throughout the house for best effects.

    Renovating is almost a necessity when you own property, but with a well thought out plan you can make the most of your investment and add value to your home. Take a smarter approach to renovations and you’ll fetch a better price for your property when it’s time to sell.

    The post Smart Renovations: Get A Better Price For Your Home With These Quick Fixes appeared first on All Peers.

    August 30, 2016

    Matthew Gertner - AllPeers: Natural Ways To Replenish The Energy You Lost [Technorati links]

    August 30, 2016 05:02 PM

    Daily energy is really important for all of us. It is vital that your energy levels are high enough to meet the daily demands of your body, your family life and your work. Jason Camper highlights that replenishing the energy you lost is not at all simple. It is a process that lasts much longer than many think, and you need to make sure you always do what it takes. Thankfully, there are many natural ways available for those who want to replenish lost energy. They are discussed in the following lines.

    Take A Fast Walk

    One of the easiest ways to replenish your energy is to take a short walk. If you walk at your own pace for just a quarter of an hour, you will end up with enough energy to last you for an hour and a half. This seems counterintuitive to many, since you spend energy as you walk. However, once you try it you will realize that it helps much more than you initially thought.


    Meditation

    Just sit back, let your muscles rest, relax and make sure the cells inside your body are filled with that all-important oxygen. When you are tense, your cells are starved of it, which means energy is not produced in an ideal way. As you sit and meditate, combined with deep breathing, your body will generate much more energy as it starts working as it should again.

    Start Writing What Bothers You

    This is quite an interesting trick that you should take into account. Stress and tension are normally the reasons why the mind ends up wandering and why you worry. Take a piece of paper and write down everything that bothers you or creates stress. When you do this you instantly feel better, and you will notice that your energy levels go up. This is the same technique asthma patients use to enhance lung function, and it is how rheumatoid arthritis patients manage to deal with pain: writing down what bothers you releases that stress-induced tension.

    Pay Close Attention To Hydration

    This is something few people really understand, although everyone will tell you they know how important it is to stay hydrated. When you are dehydrated, your body becomes more fatigued. All that is necessary to get rid of the fatigue, and gain an almost instant energy source, is to drink water. Drink as much water as your body requires; do not fixate on the seven-glasses-per-day rule or similar advice some will give you. Whenever you feel thirsty, drink and your energy will be replenished. The great thing about this trick is that you can use it several times a day, whenever you feel a little thirsty.

    The post Natural Ways To Replenish The Energy You Lost appeared first on All Peers.

    Matthew Gertner - AllPeers: Top Podcasts and Online Radio Shows on Wealth Management [Technorati links]

    August 30, 2016 03:34 PM

    Podcast and digital radio have come a long way with over a billion downloads and subscribers on Apple podcasts alone.

    Podcasts and digital radio are an excellent way to learn more about specific topics such as wealth management. They’re convenient to listen to while driving and easily accessible from your iPod, smartphone, and even your computer.


    Here is a list of some of the best shows available for streaming right now.

    BiggerPockets Podcast

    Rated as the number one real estate podcast on iTunes, the BiggerPockets Podcast is hosted by Josh Dorkin and Brandon Turner, who deliver weekly interviews and tips for listeners looking to grow their real estate business.

    The show is popular because it is full of real, practical advice. If you are thinking about starting a real estate career or want to brush up on the goings-on in the real estate industry, this could be the podcast for you.

    Smart Money with Keith Springer

    Invest for need, not greed is the motto of this twice-weekly broadcast by financial advisor Keith Springer. Recent podcasts include 5 Secret Do’s and Don’ts that Drive Successful Investors, How the 2016 Tax Code Changes Will Affect You!, The Top 10 Secrets Retirees Don’t Tell You and an interesting podcast with Two Superstar Billion Dollar Money Managers. You can find the podcast on iTunes or listen live on Saturday at 1 pm and Sunday at 6 am each week.

    The Clark Howard Show

    A longstanding name in the world of personal finance, Clark Howard is an expert on financial matters and the host of a podcast and radio show. The syndicated “Clark Howard Show” covers how to save money, spend less and avoid the many consumer rip-offs. You can listen live and even call in to Howard, who is on the air every weekday, or you can listen to his podcasts at your convenience. This is a great show for getting straightforward advice on saving money and preparing for the future.

    Freakonomics Radio

    The Freakonomics Radio Show is an extension of the popular “Freakonomics” and “SuperFreakonomics” books co-authored by journalist Stephen Dubner and economist Steven Levitt. An award-winning weekly podcast (with millions of downloads a month), Freakonomics Radio airs on public-radio stations across the country. On the show, Dubner uncovers “the hidden side of everything,” routinely covering topics ranging from racially profiling employees to how to win games and beat people. The podcast covers how to think creatively, rationally and productively, particularly about finances and other resources.

    The post Top Podcasts and Online Radio Shows on Wealth Management appeared first on All Peers.

    Matthew Gertner - AllPeers: Tips for How to Become an Engineer [Technorati links]

    August 30, 2016 02:18 PM

    The world of engineering can provide you with a fantastic career filled with innovation, design, job security and, for the most part, an excellent salary. The basic requirements are a creative mind and a strong understanding of maths and science. You should also have a passion for it: as with anything, if you are not passionate about what you are doing then you are unlikely to be successful, and there really is no point in doing it at all. If you meet the criteria and are considering engineering as a viable career path, here are some tips on how to get into the industry.

    Learn From Those Who Have Done It

    On your journey to becoming an engineer, it is important to let yourself be influenced by those already in the industry. Successful people like Anura Leslie Perera, for example, can provide great inspiration: a man who has worked in many fields of engineering, such as construction and shipbuilding, and who now owns a very successful aerospace engineering firm. Looking at how people like Anura have gone about their careers can provide you with a great model to follow.

    Education Requirements

    When it comes to education, it is important to work hard at gaining strong results in maths and science; these are the cornerstones of engineering, regardless of which sector you plan to go into. If you are looking to go into computer engineering, then naturally IT should also be studied at high school level. As for colleges, unlike in many fields of work, there isn’t as much emphasis on which college you attend when it comes to engineering jobs. Attending a college like MIT will increase your opportunities in the job market and help you command a higher salary, but it is not a prerequisite.

    Helping Yourself

    As with many careers, it really pays to put in your own work away from the classroom; when it comes to engineering, you should be a 24-hour student. Side projects centered on your chosen field of engineering will help you keep your mind focused and improve your ability to see projects through from beginning to end. You should also be making friends and contacts within the industry; there is no harm in emailing a group of professionals to ask for their help and advice. If you start building a network early on, it can pay great dividends in the future.

    Widen Your Abilities

    Really succeeding in the engineering industry takes more than just being a great engineer; it is also important to have a wide variety of skills. These can include business acumen, leadership ability, interpersonal skills or knowledge of a wide variety of sectors. If you want to stand out when it comes to getting a job, it is vital that you have plenty of strings to your bow.

    The post Tips for How to Become an Engineer appeared first on All Peers.