Outlook for iOS and Android potentially harmful

Posted on 15 April 2015 by Joseph Turner

This is the first of a series of posts on threats and risks we've found using Interlock.

A few days ago, we were walking a customer through the results of their Interlock Test Drive. While examining one of the Bad users, the customer interrupted the demonstration, saying "That's not right. That user shouldn't be in Oregon." We took a closer look at the identity in question and found something troubling: the user's internal Exchange email account was being logged into by an automated system hosted in AWS.

A bit of background. Interlock uses advanced Identity Analytics to evaluate the risk each identity poses to the organization based on their characteristics and activities over time. Identities that pose a large risk are flagged as Bad and bubbled up to the user interface to be evaluated (though they can also be mitigated automatically or generate out-of-band alerts). The identity we were looking at had been flagged as Bad precisely because it looked like abnormal access: outside the user's and organization's normal operating locations, from a new IP associated with a new location some distance away, with a new device type.

We ran a WHOIS search on the IP address of the access, and it came back as an AWS address.

NetRange:       XXX.XXX.0.0 - XXX.XXX.255.255
CIDR:           XXX.XXX.0.0/12
NetName:        AMAZON
NetHandle:      XXX-XXX-XXX-XXX-XXX
Parent:         XXXX (XXX-XXX-XXX-XXX-XXX)
NetType:        Direct Allocation
Organization:   Amazon Technologies Inc. (AT-88-Z)
RegDate:        2014-10-23
Updated:        2014-11-13
Ref:            http://whois.arin.net/rest/net/XXX-XXX-XXX-XXX-XXX

OrgName:        Amazon Technologies Inc.
OrgId:          AT-88-Z
Address:        410 Terry Ave N.
City:           Seattle
StateProv:      WA
PostalCode:     98109
Country:        US
RegDate:        2011-12-08
Updated:        2014-10-20
Comment:        All abuse reports MUST include:
Comment:        * src IP
Comment:        * dest IP (your IP)
Comment:        * dest port
Comment:        * Accurate date/timestamp and timezone of activity
Comment:        * Intensity/frequency (short log extracts)
Comment:        * Your contact details (phone and email) Without these
Comment:        we will be unable to identify the correct owner of the
Comment:        IP address at that point in time.
Ref:            http://whois.arin.net/rest/org/AT-88-Z

OrgAbuseHandle: AEA8-ARIN
OrgAbuseName:   Amazon EC2 Abuse
OrgAbusePhone:  +1-206-266-4064
OrgAbuseEmail:  ec2-abuse@amazon.com
OrgAbuseRef:    http://whois.arin.net/rest/poc/AEA8-ARIN

OrgNOCName:   Amazon AWS Network Operations
OrgNOCPhone:  +1-206-266-2187
OrgNOCEmail:  aes-noc@amazon.com
OrgNOCRef:    http://whois.arin.net/rest/poc/AANO1-ARIN

OrgTechHandle: ANO24-ARIN
OrgTechName:   Amazon EC2 Network Operations
OrgTechPhone:  +1-206-266-4064
OrgTechEmail:  aes-noc@amazon.com
OrgTechRef:    http://whois.arin.net/rest/poc/ANO24-ARIN

Not good. Somehow, a system in AWS had the identity's credentials and was logging into the organization's Exchange account. Digging deeper, we looked at the User Agent string of the activity and found that it claimed to be Outlook-iOS-Android/1.0. A Google search turned up this article, indicating that the User Agent string is associated with the new (at least to Microsoft, more on that in a moment) Outlook for iOS and Android apps (or something masquerading as such). So why is it in AWS?

Rather than building the apps from scratch, Microsoft purchased Acompli, the company responsible for them. In acquiring the company, they also acquired the legacy architecture, which currently resides on AWS (though surely they are furiously moving it over to their own infrastructure!). Reading the comments on the article above, we found other enterprise users who had noticed traffic from these apps and were concerned. This led us to the Acompli Privacy Policy, which includes the following lines:

We collect and process your email address and credentials to provide you the Service.

We collect and process your email messages and associated content to provide you the Service.

In other words, in order to provide the quality of service they wanted for Outlook for iOS and Android, they are caching both credentials and email data on their own servers.

For the casual, personal user of Office365, this is probably no big deal, but for enterprise customers, particularly ones constrained by compliance requirements, this may be unacceptable. Enterprise data and credentials are being copied and stored on third-party servers. This is a violation of the security policies of many (most?) organizations.

According to the comments by Microsoft personnel, these are valid concerns. Organizations that want to limit their exposure to this issue can use the ActiveSync Allow/Block/Quarantine list feature of Exchange 2010+ (or see this article for older versions) to limit or block the access of this app, as sketched below. Of course, credentials may still be cached even if the app is blocked by Exchange, so some user education is probably also in order.
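For Exchange 2010+, a device access rule along these lines should do it (a sketch from the Exchange Management Shell; the Characteristic and QueryString to match depend on how the app identifies itself to your server, so verify against your own ActiveSync device list first):

New-ActiveSyncDeviceAccessRule -Characteristic DeviceType -QueryString "Outlook" -AccessLevel Block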

If you're interested in finding potential threats like this automatically as they arise in the future, please contact us to take a test drive of Interlock.

A (Brief) Treatise on Risk

Posted on 08 March 2015 by Joseph Turner

My first week here at Mobile System 7 was a crazy one; we spent hours with dry-erase markers in our hands, going over drawings of architecture, conceptual drawings of data flows and event streams, a few equations, and lots of (probably crucial) stuff that never made it into my notebook. I knew that our goal was to help organizations identify and block Bad Guys, but I probably came out of that week thinking that the product was more about whiteboards than anything else.

Over the months my conceptual model of Interlock has distilled into a better approximation of its true essence: risk, as it applies to identities, both noticing risk and responding to risk. Risk is a challenging problem, seemingly impossible at first glance. Calculated risk is a measure of not only the likelihood that a given set of features is going to have a downside, but also a measure of the severity of that downside. It seems like there are too many degrees of freedom, that this set of equations has too many unknowns. How can we even approach this problem, much less do it reliably?
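Stripped all the way down (the real model is considerably richer, and both terms are themselves hard estimation problems), calculated risk is an expected-value calculation:

risk = likelihood of a downside × severity of that downside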

Indeed, it is a challenge to effectively estimate risk, and in a lot of ways it is both a people problem and a technology problem, but it is in no way impossible. In order to better understand the overall picture of risk, it is informative to first separate Interlock's view of risk into two separate concepts: unsubstantiated risk and intrinsic risk.

Unsubstantiated Risk

Unsubstantiated risk is the risk posed by activities with unsubstantiated parameters, such as activities with a new device or in a new location. The important aspect of the unsubstantiated parameters is that they represent a departure from the user's (or population's) normal operating parameters. Each activity that has unsubstantiated parameters potentially poses a threat because it is associated with a nonzero probability that the person taking the action is not the person they claim to be. Think about it: if you saw someone log into their email from their house, using the same MacBook they've used every day for a year, at the time of day they log in every day, it is almost certainly that person; if instead they suddenly logged in from Sudan using a new Windows PC at a time when they are normally asleep, the activities performed with those parameters are much more likely to be malicious. Those activities from Sudan pose a large unsubstantiated risk.

It's worth saying that the typical approach to unsubstantiated risk is to build more structure and trust around authentication, to try and ensure that the person behind each activity is the person they claim to be. Unfortunately, we have seen time and again that dedicated attackers can and will gain access to desired resources despite expensive and complicated authentication measures such as MFA. As a result, conflating authentication and security creates a common blind spot for organizations, one that we talk about a lot: the identity gap. The identity gap results when authentication is the last line of defense, leaving each further action subject to no analysis. Monitoring unsubstantiated risk for each access goes a long way toward closing this security gap. Bad guys can no longer act with impunity, as every action they take may reveal them.

Intrinsic Risk

Intrinsic risk is the risk posed by the identity as a result of its characteristics: things like the total number of devices it uses or the number of groups it is in, as well as labelable characteristics, like the user being a frequent traveller. Intrinsic risk adds an element of risk to every activity associated with the given identity. Each identity with intrinsically risky characteristics potentially poses a threat because it has the capability to be very destructive to the organization. As a result, every action it takes must be subject to higher scrutiny. Intrinsic risk helps prevent a number of potential issues, but it's particularly useful in identifying potential insider threats.

Because it adds risk to activities, intrinsic risk also magnifies any unsubstantiated risk the identity may be associated with. Because intrinsic risk implies that the identity has the power to be very destructive, the implications of a compromised account with high intrinsic risk are dire. For this reason, intrinsically risky identities with activities exhibiting unsubstantiated risk represent the largest threat to an organization.

Trusting Risk

The ultimate goal is not to have the most accurate estimate of risk for each identity, but to have the most accurate actionable estimate of risk. Each estimate can therefore be associated with a measure of reliability and a measure of actionability based on all of the information we have available to us. Each type of risk has its own methods for computing confidence, and has different ways of responding to the addition of new information.

By its very nature, intrinsic risk has a high degree of reliability, but by itself represents low actionability. Options for remediating intrinsic risk include destructive actions such as revoking permissions of the identity permanently, which lowers the intrinsic risk directly, or forcing the identities to constantly validate their actions. As most organizations have a spectrum of intrinsic risk across their identities, neither option is acceptable in general.

In contrast, unsubstantiated risk typically has a low degree of reliability at first blush, but a high degree of actionability. This is because unsubstantiated risk can be remediated by allowing the user to substantiate the parameters of their action. Substantiating actions in this way can happen automatically or manually. If a user logs into a service from their new phone, it is an unsubstantiated risk; if they continue to use the phone within otherwise normal parameters (e.g. logging in from their normal operating locations at their normal times of day using normal access patterns), that phone grows to be trusted automatically. Alternatively, assuming the presence of a trusted communication channel, the user can be asked to verify the parameters directly - did you just use a new Android phone in Brazil?
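To make the automatic case concrete, here is a deliberately naive sketch - not Interlock's actual model, and the decay factor is an arbitrary assumption - in which the risk contributed by a new device shrinks with each use inside otherwise normal parameters:

// Illustrative only: a new device's risk contribution decays as it is
// used within the identity's normal operating parameters.
function deviceRisk(initialRisk, normalUses) {
    var decayPerUse = 0.7; // assumed constant; a real model would learn this
    return initialRisk * Math.pow(decayPerUse, normalUses);
}

deviceRisk(1.0, 0);  // => 1.0: a brand-new phone carries its full risk
deviceRisk(1.0, 10); // => ~0.03: after ten normal logins it is effectively trusted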

Responding to Risk

Ultimately, how we address a given risk boils down to a combination of the above factors, the degree of each type of risk, the distribution of each type of risk in the population, and the policies specified by the user. Adaptive access control aims to respond to risk in the way that is both least intrusive and most effective.

Unsubstantiated risk

Addressing unsubstantiated risk by asking the user to verify the parameters of the activities directly offers a one-time process for responding to risk. If the parameters are indeed correct, the unsubstantiated risk is attenuated; if not, it is a direct confirmation of a security failure. We are currently developing a direct solution like this for risk remediation in Interlock. It promises a tremendous improvement to the focus and accuracy of our risk estimates with little or no expenditure of scarce security resources on the part of the organization.

Intrinsic risk

The correct response to intrinsic risk is usually not to take direct action, but rather to subject the identity to closer scrutiny. A direct approach to intrinsic risk is inappropriate: for a potential insider threat, the user herself represents the threat and would be expected to respond negatively; for a sensitive user, substantiated activities are part of normal operating procedure.

In many ways, the biggest challenge is the final factor in responding to risk: user policies. What tools can we provide the user with to help them understand the identities in their organization, the scope of their risk, and the best way to address that risk? How can we leverage the functional differences between mitigation strategies to effectively and unobtrusively respond to risk? What granularities of mitigation make sense across organizations and within specific organizations? These are all questions we are continually striving to answer in Interlock. Ultimately, organizational risk mitigation is a human problem more than a technology problem. The best we can do is continue to work with organizations who are struggling with this burden now to find the best solution for everyone.

If you're an organization interested in finding the best solutions to addressing risk in your identities, contact us about a free Test Drive of Interlock. If you're a developer (or data scientist, or UX designer!) who wants to help navigate risk across and among populations of thousands of identities with hundreds of millions of activities, send me an email with who you are and what you can add to our team.

Browserifying our Backbone app

Posted on 05 March 2015 by Chase Courington

A couple months ago we updated our front-end build process from using the built-in Rails asset pipeline to using Grunt with Browserify. In this post I'll quickly cover how we've updated our Backbone app to use Browserify for template pre-compilation and a more modular architecture.

Install and Config

We use grunt-browserify, which gives us a simple integration with Grunt and our UI tasks. We need to install this module to our devDependencies:

$ npm install grunt-browserify --save-dev

We then configure our browserify.js task that gets called in our gruntfile.js.

In browserify.js we can define multiple options, one of which is transform. Many different transforms can be passed in, but we only use one, for our Handlebars template compilation:

options: {
    transform: ['hbsfy']
}

Define the files for Browserify to start compiling your app. Here we're looking at cn.home.js and then following the require() chain to create our output file, home.concat.js.

files: {
    '<%= paths.dist_js %>/dist/home.concat.js': 'node_modules/app/cn.home.js'
}

Our UI Architecture

We utilize "controllers" in our Backbone app, which are Backbone views that load children and handle some basic event delegation. This helps us build more modular widgets, partials, mixins and decorators, so our code is composed of many smaller, interchangeable modules.

This architecture also works well with Browserify. Where we were previously using grunt-contrib-concat on huge arrays of our modules, we now simply require() direct dependencies ("modules") and Browserify takes care of the rest. An immediate benefit is lower maintenance overhead: no more load-order bugs from our concat order.

In our example cn.home.js we require just the direct dependencies for this controller.


Controllers.Home = Backbone.View.extend({
    tpl: require('app/t.page.home.hbs'),
    // ...
});

We do the same with other files that have modules they require, v.widget.date.range.inspection.js.


Views.Widgets.DateRangeInspection = Backbone.View.extend({
    tpl: require('app/t.widget.dateRangeInspection.hbs'),
    // ...
});

Currently we assign our Controllers, Views, Models, etc. to globals. This is something we carried over and will most likely be phasing out. For example, on line 61 of v.widget.date.range.inspection.js we have this.childViews.popup = new Views.PopupWidgetContainer({, which refers to a global that is set by the require('app/v.popup.container.js'); on line 2. We'll be assigning that require() to a local variable instead, like var myWidgetContainer = require('app/v.popup.container.js');, so that line 61 reads this.childViews.popup = new myWidgetContainer({. Side by side, the change looks like this:
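// Before: line 2 pulls the module in only for its global side effect...
require('app/v.popup.container.js');
// ...and line 61 relies on the global it sets
this.childViews.popup = new Views.PopupWidgetContainer({ /* options */ });

// After: the dependency is explicit and local
var myWidgetContainer = require('app/v.popup.container.js');
this.childViews.popup = new myWidgetContainer({ /* options */ });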


Browserify enters our controller and crawls back through the require() statements, compiling all our modules into a single output file that is loaded in the client.

There is one catch: our JS architecture, with all its directories and subdirectories, somewhat throws Browserify off. We easily solve this with tasks in grunt-contrib-copy and grunt-contrib-clean. Our JS modules are copied to /node_modules/app, giving Browserify a clear path to all our JavaScript modules. Once Browserify has compiled the output to <%= paths.dist_js %>/dist/, we clean /node_modules/app.
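The relevant task config looks roughly like this (the cwd here is an assumption; yours is wherever your modules live):

copy: {
    browserify: {
        expand: true,
        cwd: 'app/js/',            // assumed source directory
        src: ['**/*'],
        dest: 'node_modules/app/'  // where Browserify resolves require('app/...')
    }
},
clean: {
    browserify: ['node_modules/app/']
}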

Now running $ grunt prep calls Browserify as part of our front-end build. It copies our modules from where they can be safely edited to where they'll get compiled. Browserify then crawls our files looking for require() statements and compiles the JavaScript and Handlebars templates from the modules in /node_modules/app; once complete, that directory is cleaned.

Running "copy:browserify" (copy) task
Copied 577 files

Running "browserify:dist" (browserify) task

Running "clean:browserify" (clean) task
>> 1 path cleaned.

Execution Time (2015-03-05 18:27:27 UTC)
browserify:dist        14.7s  ▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇▇ 33%


This workflow helps us keep our code small and organized, and it keeps us from committing any compiled code. With this flow we only commit our modules; anything that is compiled gets ignored, helping to avoid merge conflicts.

Refactoring a large Backbone app away from relying on concatenation and globals is tedious. If, like us, you have lots of unit tests, add some time for getting those working properly; you may find that you're doing a lot of find/replace and copy/paste to get your tests passing with Browserify. In the end we've found the few days of effort to refactor worth it.

This format and flow works for us. It has added considerable time to the first full pass of the UI build, but when we run $ grunt dev and watch for changes, we only Browserify newer files. Improving this build time is something we'll be addressing in the immediate future.

We are just scratching the surface of the benefits of Browserify. But, so far, it has proven to be a very powerful and useful tool that we'll continue to explore. If you have any questions, comments, or improvements, feel free to reach out to @chasecourington. Cheers!


Automated UI Testing with Sauce Labs, The Intern and Grunt

Posted on 28 January 2015 by Chase Courington

Manually testing your UI across multiple browsers and platforms is time consuming. Your time should be spent writing code and building UI components, not running through scenarios in Chrome 39 on Mac OS X 10.10.1, then again on Windows 8, and then testing IE! Let alone managing your own Selenium test farm…

Thankfully we have tools like Sauce Labs Connect and The Intern to help conduct our tests. I'll walk through how we've set up these two with Grunt to test the UI in our Rails app.

Setup Sauce Labs

To get started you'll need a Sauce Labs account. If you don't have one you can sign up for a free trial. Sauce Labs frees us from the setup and management of test VMs. As a bonus, it was founded by the creator of Selenium, so it's safe to assume they know what they're doing.

Install Intern

Intern interacts with Sauce Labs' API and gives us a framework for writing functional (and unit) tests for Selenium. Intern gives us tons of flexibility and stable backing from SitePen, the creators of Dojo. Install it to your package.json development dependencies: $ npm install intern --save-dev

Grunt and Intern

Getting Intern to work with Grunt is easy since support is baked into Intern. We created an Intern task and wired it into our gruntfile.js:


module.exports = function (grunt) {
    grunt.config.set('intern', {
        runner: {
            options: {
                config: 'tests/intern/intern',
                runType: 'runner'
            }
        }
    });
};


Once we create the intern task in our /tasks/ directory, we can run it from the command line with $ grunt intern. This runs our tests in a test runner. Our gruntfile.js might look something like:


module.exports = function (grunt) {

    // Project config
    grunt.initConfig({
        // read grunt tasks from npm
        pkg: grunt.file.readJSON('package.json')
    });

    // load grunt plugins from directory
    grunt.loadTasks('tasks');

    // package up product for production
    grunt.registerTask('prod',
        'Prepare project assets',
        ['clean', 'bowercopy', 'jshint', 'modernizr', 'browserify', 'less', 'cssmin', 'concat', 'uglify', 'copy', 'imagemin', 'jasmine', 'intern', 'clean:prod']
    );

    grunt.registerTask('default', ['prod']);
};


We run $ grunt intern in a "prod" task. We still use Jasmine for our unit tests and for development (for now). We run UI tests when we build for a release, since the UI tests cost more (time and money).

Configure Intern

The Intern docs are pretty great and help us get going with configuration. The areas of interest are:

  • capabilities, namely the screen-resolution
  • environments
  • tunnel && tunnelOptions. You'll need to get a Sauce Labs access key from your account panel on the web.
  • functionalSuites. This is where we point the AMD loader to our tests.


define({
    capabilities: {
        'selenium-version': '2.41.0',
        'screen-resolution': '1280x1024'
    },

    environments: [
        { browserName: 'internet explorer', version: '11', platform: 'Windows 8.1' },
        { browserName: 'internet explorer', version: '10', platform: 'Windows 8' },
        { browserName: 'internet explorer', version: '9', platform: 'Windows 7' },
        { browserName: 'firefox', version: '29', platform: [ 'Windows 7' ] },
        { browserName: 'chrome', version: '39', platform: [ 'Windows 7' ] },
        { browserName: 'safari', version: '6', platform: 'OS X 10.8' },
        { browserName: 'safari', version: '7', platform: 'OS X 10.9' }
    ],

    tunnel: 'SauceLabsTunnel',
    tunnelOptions: {
        username: 'sauceLabsUserName',
        accessKey: 'sauceLabsAccessKey'
    },

    functionalSuites: ['tests/intern/functional/login', 'tests/intern/functional/user-subjects']
});

Write a Functional Test

Writing functional tests with The Intern is extremely simple. Intern comes with Chai, a great assertion library, or you can specify a different one. Leadfoot has a very nice API for navigating around your remote browser, with some great documentation.

Here's a gist of what our test, login.js, looks like.

We're simply testing some routing for unauthorized users, user sign in, launching a modal, launching a popup inside a modal, and finally navigating to a different page inside our app.
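A trimmed-down sketch of that kind of suite (the selectors and credentials here are placeholders, not our real markup):

define([
    'intern!object',
    'intern/chai!assert'
], function (registerSuite, assert) {
    registerSuite({
        name: 'login',

        'sign in': function () {
            return this.remote
                .get('http://localhost:3000/')   // our local Rails server, via the tunnel
                .setFindTimeout(10000)           // allow for network and VM lag
                .findById('email').type('user@example.com').end()
                .findById('password').type('secret').end()
                .findByCssSelector('button[type=submit]').click().end()
                .findByCssSelector('.dashboard') // placeholder for a post-login element
                .getVisibleText()
                .then(function (text) {
                    assert.isTrue(text.length > 0, 'dashboard should render after sign in');
                });
        }
    });
});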

Run the Tests

This is now as easy as $ grunt intern.

We get CLI output giving us status and reporting test results. You can see from the image that we have 1 failure.

We can even see in the Sauce Labs browser UI that tests are queuing up.

We see our UI is failing on Windows 7 in IE 9.

We can go and step through the tests and view screenshots that are automatically captured along the way. (Windows 8, IE 10)


Setting appropriate timeouts between actions is imperative. Tests can fail because you're trying to assert something before it has had time to take place. I've set a default timeout to provide plenty of time for things to happen across the network and in the different VMs.

Initially I encountered some issues testing our Rails app, which runs at port 3000, via the proxy. Intern and Sauce Labs start you at the port you specify in your config (:9000 by default), but you can use Leadfoot to navigate the remote browser to your local app server (localhost:3000) and then start your tests from there. This adds some time and complexity, because you'll need to log in to your app on the remote browser repeatedly for different suites if they need to run behind authentication. A little bit of a hack, and maybe there's a better way to do this, but I found it to work.

Running functional tests can get costly in time and money. We run Jasmine unit tests as part of our Grunt development process with grunt-contrib-watch and then run a whole functional test suite with Intern and Sauce Labs pre-release. We think this process works pretty well.

The Intern is fairly powerful in that we can also write our unit tests with it. Since we've already got some 300+ Jasmine unit tests, we're not jumping at the opportunity to refactor those into the Intern. However, we may move in that direction eventually to simplify our development and testing process.

For questions, comments, suggestions, etc. please reach out. @chasecourington


Adaptive Access Control and Interlock

Posted on 21 January 2015 by Joseph Turner

In an earlier post I talked about Identity Analytics, outlining what they are, why they are a hard problem, and our approach in Interlock. And while Identity Analytics are useful on their own - imagine an alert system that offered security officers a low false positive mechanism for investigating potentially compromised identities - their real value emerges when coupled with the other piece of Interlock: Adaptive Access Control. Combined, these subsystems provide a powerful framework for not only understanding risk but reacting to it in real time. This framework lowers the cost of administration, reduces user frustration, and most importantly automatically addresses risk as soon as it arises.

So what is Adaptive Access Control (AAC)? In this post, I'll explain what AAC is in general and why it provides benefits that organizations need. Afterwards, I'll go over the Interlock approach to AAC in increasing detail, from the general concept, down to policies and exceptions, and finally mitigation steps against individual services.

What is Adaptive Access Control?

Adaptive Access Control is a way for organizations to differentially control access to resources and services in a way that incorporates a current view of risk. More simply put, it changes the way a given user can access resources when the perceived level of risk for that user changes. This context-sensitive approach to security is both flexible and powerful. It lowers the cost of administration by reducing IT support load and manual security review. It reduces user frustration by lowering the number of hoops a user must jump through to access resources in the average case. In the case where there is identified risk, though, it can provide even greater hurdles for an attacker to access sensitive services and data. It can also leverage existing mitigation techniques.

As an example, consider a system where users must use two-factor authentication each time they log in. This is costly both in user time and frustration and in IT support for the system. It also fails to catch every attacker, because if the attacker can control the two-factor device they can still access the system. In contrast, with AAC, users with a normal level of risk authenticate normally. Users exhibiting a higher-than-acceptable level of risk may be directed through two-factor auth. Users with yet higher risk can be denied access to the system altogether, blocking an attacker even if they had compromised the two-factor device. This type of AAC is specifically called adaptive authentication, as it alters the way users authenticate based on risk. As we will see, there are other, finer-grained forms of AAC as well.
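At its simplest, you can picture adaptive authentication as a mapping from risk level to authentication hurdle. A sketch (the levels and actions here are illustrative, not Interlock's code):

// Map an identity's current risk level to the hurdle it must clear.
function authRequirementFor(riskLevel) {
    switch (riskLevel) {
        case 'Good':    return 'password';   // normal login, no added friction
        case 'Suspect': return 'two-factor'; // step-up authentication
        case 'Bad':     return 'deny';       // block access altogether
    }
}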

AAC in Interlock

At the risk of stating the obvious, for AAC to work effectively it needs a reliable measure of risk for individual identities. This is why the Interlock Identity Analytics engine feeds the Adaptive Access Control component. As a quick refresher, let's review the functional diagram from the last post. Because there is a bit more input that is fundamentally relevant, it has been updated to show more detail.

The AAC component is driven primarily by three inputs: the output of the Identity Analytics, which is simply the risk level of the identities under management; the available mitigation mechanisms exposed by the services under management; and the policies specified by the user. In most cases the latter two inputs change infrequently if at all. If we take them as fixed, Adaptive Access Control is simply performing actions or side effects on the services under management in response to risk. What sort of side effects?

  • Notifying someone
  • Changing user permissions
  • Triggering two-factor authentication
  • Deprovisioning
  • Service-specific measures, such as limiting permissions for specific resources

How does Interlock know which action to take for a given risk level change? That's where policies come in.


Policies

Policies are the way the user specifies which mitigation strategies apply to which identities. At their most basic, they define a mitigation strategy for a given risk level per service. For instance, the policies for Okta might be:

  • Bad users → Move to group Bad in Okta
  • Suspect users → Move to group Suspect in Okta
  • Good users → Do nothing

For each such policy, the defined strategy is applied for the given service when an identity moves into the given risk level. Each mitigation strategy also includes an undo mechanism that is applied when the identity moves out of that risk level.

For example, with the policies above, when an identity transitions from Good to Bad, it will be moved into the group Bad. When our Bad identity transitions down to Suspect, it will be removed from group Bad and added to group Suspect. Finally, when it transitions back to Good, it will be removed from group Suspect. Of course, not all mitigation strategies are completely reversible. For example, reprovisioning a user in Okta requires action on the part of the user. The user is free to choose fully reversible mitigation strategies if they desire, but in general this is why the emphasis is on side effects instead of state.
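A sketch of these semantics (the helper names are hypothetical, not Okta's or Interlock's actual API): each strategy pairs an action with its undo, and a transition undoes the old level's strategy before applying the new one.

// Hypothetical and illustrative only.
var oktaPolicies = {
    Bad: {
        apply: function (id) { okta.addToGroup(id, 'Bad'); },
        undo:  function (id) { okta.removeFromGroup(id, 'Bad'); }
    },
    Suspect: {
        apply: function (id) { okta.addToGroup(id, 'Suspect'); },
        undo:  function (id) { okta.removeFromGroup(id, 'Suspect'); }
    }
    // Good: no strategy defined, so no action is taken
};

function onRiskTransition(identity, fromLevel, toLevel) {
    if (oktaPolicies[fromLevel]) { oktaPolicies[fromLevel].undo(identity); }
    if (oktaPolicies[toLevel])   { oktaPolicies[toLevel].apply(identity); }
}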

Though the examples above focus on a single mitigation strategy, policies may specify a group of mitigation strategies. This simply means that multiple actions are taken when the appropriate transition occurs. In fact, you can think of the per-service policies as combined into a single set of mitigation strategies scoped to the service. Taking multiple mitigation strategies is particularly relevant in the context of policy exceptions.

Policy exceptions

The simple policies described above assume that the sensitivity and cost associated with mitigation is the same for all users. In most organizations, this is not the case. For example, some users have access to more important data than others. These users have higher sensitivity and therefore may need to be mitigated more aggressively when exhibiting risky behavior. On the other hand, executives at a company may require access to important information at all times. They have a higher cost of mitigation than the average user, because mitigating them aggressively will result in unacceptable loss of access. These users may therefore need to be mitigated less aggressively than normal users.

To specify these types of exceptions, Interlock has a mechanism for defining policy exceptions for each service. These policies can be generalized as tuples of (activity filter, risk level, mitigation strategy). Activity filters are restrictions on the activity to which the mitigation strategy should apply. The most general activity filters refer to identities, such as identities within a certain group or a specific identity. In the context of this abstraction, a set of policies might look like this:

  • (Users in group CxO, Bad, Notify security team)
  • (Users in group Admin, Bad, Deprovision)
  • (All users, Bad, Move to group Bad)
  • (All users, Suspect, Move to group Suspect)
  • (All users, Good, Do nothing)

The bottom three tuples of course represent the policies from the previous section. So how do these get resolved? What happens if the CEO is also an admin?

Policy resolution

The only way that policy resolution makes sense with arbitrary mitigation strategies is to apply them in a specified order. Along with the tools for creating policies, Interlock also provides a way to order the policies by preference of application. Policies are then evaluated in the order given, with evaluation stopping at the first applicable policy.

For the set of policies given above, this would mean that when a CEO who was also an admin transitioned to Bad, Interlock's AAC would only notify the security team and not deprovision the CEO's account. The desirability of this sort of exception should be apparent.
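Concretely, first-match resolution looks something like this (names illustrative, not Interlock's API):

// Ordered policies as (activity filter, risk level, mitigation strategy).
function inGroup(name) {
    return function (identity) { return identity.groups.indexOf(name) !== -1; };
}
function anyUser() { return true; }

var policies = [
    { filter: inGroup('CxO'),   risk: 'Bad',     strategy: 'Notify security team' },
    { filter: inGroup('Admin'), risk: 'Bad',     strategy: 'Deprovision' },
    { filter: anyUser,          risk: 'Bad',     strategy: 'Move to group Bad' },
    { filter: anyUser,          risk: 'Suspect', strategy: 'Move to group Suspect' },
    { filter: anyUser,          risk: 'Good',    strategy: 'Do nothing' }
];

// Evaluation stops at the first applicable policy, so an identity in
// both CxO and Admin resolves to 'Notify security team', never 'Deprovision'.
function resolve(identity, riskLevel) {
    for (var i = 0; i < policies.length; i++) {
        if (policies[i].risk === riskLevel && policies[i].filter(identity)) {
            return policies[i].strategy;
        }
    }
    return null;
}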

Policy compilation

The underlying mechanism of policy resolution in Interlock is a series of compilation steps that results in a set of rules reflecting the user-defined policies discussed above, as well as the current risk level and state of the overall system. These rules are tuples of the form (user id, service, mitigation strategy). This architecture is a result of the real-time nature of Interlock, in which we want to push actions as close to the service as possible (more on that in a bit), and of the fact that the mitigation strategies in most cases represent side effects and not state, as discussed above.

The process of rule compilation is performed whenever there is a change in policies, risk level, or state, where state refers to any characteristic of activities that can be used as a filter in the policies. An example of a triggering state change would be a user being added to the group Admin. As a result of that change, the policies above would need to be recompiled so that the rules reflect the correct mitigation for that user.
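Building on the resolution sketch above, a full recompile might look like this (illustrative only; Interlock's actual engine is more sophisticated about what it recomputes):

// Recompute the rule set: one resolved rule per identity per service.
function compileRules(identities, services) {
    var rules = [];
    identities.forEach(function (identity) {
        services.forEach(function (service) {
            // In practice each service has its own ordered policy list.
            var strategy = resolve(identity, identity.riskLevel);
            if (strategy) {
                rules.push({ userId: identity.id, service: service, strategy: strategy });
            }
        });
    });
    return rules;
}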


As discussed, the output of the compilation phase is a set of rules that specify actions for each identity. The final phase of AAC is to perform the actions specified in the rules. Frequently, the way the actions are taken may differ from service to service, and the mechanism may also differ by action taken. For example, to move a user into a group in Okta, an API request is emitted. If instead we want to deny access to a specific SharePoint resource, the set of rules is published to the SharePoint agent, which can then deny individual requests for the resource.

In general, there are a set of adapters for each service that understand the requirements for mitigation against those services. These adapters take the required action to perform the mitigations requested in the rule set.


Adaptive Access Control offers a lot of flexibility and power to organizations looking to protect sensitive services and resources. Though the basic concept is simple, a surprising amount of sophistication is required to ensure that the behavior of the system can adapt to the unique needs of each organization. I've tried to capture the main ideas and how we address them in this article, but there are a number of engineering challenges that arise in areas like efficient policy compilation and specific service integrations that are beyond what could be covered here.

One ongoing question for us is how to build interfaces that allow users to intuitively and efficiently define their policies and ordering, especially as the number of potential filters, services, and mitigation options grows. We are actively working on a number of designs to improve our current implementation in this area. Development in this area, along with adding new services and mitigation options, will continue to improve our approach to Adaptive Access Control in the future.

Have questions or comments? Hit me up on Twitter. Interested in changing the face of security with us? Send me your resume.

Copyright © 2014 Mobile System 7 - www.mobilesystem7.com