AWS Security Configuration Scanner

Large enterprises tend to invest in CSPM (Cloud Security Posture Management) systems like Dome9, Prisma Cloud, or Orca Security. For smaller companies, a CSPM can be cost-prohibitive, so they tend to do nothing at all and hope they never have a breach. That is a dangerous place to be.

Let's assume you do look at tools like Trusted Advisor once in a while. It will show you some of the big-ticket items you need to look at, but it doesn't go into a lot of detail. That's where the AWS Security Info Configuration Scanner comes in. It's a Python project I've been working on for the past two years, and it is finally ready for release.

As the title implies, it's a tool you can use to scan the configuration of your AWS account. It has a number of built-in security controls that will give you an overview of where the security issues in your account could be. At a high level, the CIS AWS Foundations Benchmark was used as the basis for the majority of the security controls.

I can already hear some of you saying: why should I use this script when I can simply use Security Hub? And you'd be right: you could use Security Hub (and in fact, I highly recommend it!). The big difference is that Security Hub requires Config rules to be set up, and Config will incur additional charges. This is not necessarily a bad thing. The problem, however, is that Security Hub will keep generating alerts, and unless you're actively monitoring them, the alerts will simply go into a black hole, never to be seen again.

Why use this script then? I view it more as an audit tool. It generates a point-in-time snapshot of what the security configuration of your AWS account looks like, and the output can then be used by auditors to discuss and challenge the findings with the various cloud security architecture teams.

How to use it

I would recommend that you run the script from the us-east-1 region. Since this is the central region for AWS (where global services such as IAM live), most of the API calls will occur against this region, so it's best to run the script either from CloudShell or from a Spot Instance in that region.

CloudShell

Fire up CloudShell in the us-east-1 region. Assuming you have ReadOnly access to the AWS account, simply execute the following commands:

git clone https://github.com/massyn/aws-security
pip3 install boto3 --upgrade
python3 aws-security/scanner/scanner.py --json output.json --html output.html

That’s it! The script should start running. Depending on the size of your environment, it may take about 30 minutes to run, maybe more. When it’s done, you can send the output.json and output.html files to an S3 bucket.
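For example, with the AWS CLI (my-results-bucket below is a placeholder for a bucket you can write to):

aws s3 cp output.json s3://my-results-bucket/
aws s3 cp output.html s3://my-results-bucket/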

Spot instance

This is a work in progress. I have been successful in running a Spot Instance to execute the script. I am busy packaging the solution, and will update this blog post once it is ready. Essentially, you need to:

  • Create an IAM instance profile with a role that has read-only access to the entire AWS account, and write access to a specific S3 bucket.
  • Spin up an EC2 Spot Instance with a public IP address, attach the instance profile to it, and run the same commands as mentioned above.
  • When the script is done, copy the generated files to an S3 bucket, and destroy the EC2 instance (see the user-data sketch after this list).
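As a rough sketch only (this assumes an Amazon Linux image, and the bucket name is a placeholder), the Spot Instance's user data could look something like this:

#!/bin/bash
# sketch: run the scanner on boot, ship the results to S3, then shut down
yum install -y git python3 python3-pip
pip3 install boto3 --upgrade
git clone https://github.com/massyn/aws-security
python3 aws-security/scanner/scanner.py --json output.json --html output.html
aws s3 cp output.json s3://my-results-bucket/
aws s3 cp output.html s3://my-results-bucket/
# assumes instance-initiated shutdown behaviour is set to terminate
shutdown -h now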

Operation

The script connects to AWS using the default credentials, and starts to interrogate each of the services to retrieve the data. This is where the json file comes in. When the data extraction is done, you'll have a single json file that contains (most of) the system configuration defined on your AWS account. This has huge implications: if you're interested in digging through the config, you can write your own jmespath queries to retrieve anything your heart desires.
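As a quick sketch of what that could look like (the query below assumes a hypothetical key layout; adjust it to match the actual structure of your extract):

pip3 install jmespath
python3 -c "import json, jmespath; print(jmespath.search('s3.list_buckets[].Name', json.load(open('output.json'))))"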

Once the json file has been created, the policy parser kicks in. It reads through the json file, applies the logic predefined in the script, and generates a report (in HTML format) of all the findings.

Hidden features

When specifying the output file names (--json, --html), you can include %a (for the account ID) or %d (for the date). This allows you to have a batch file or shell script you can run against a number of accounts, and it will keep a file per account, per date.
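For example, to keep one dated file per account:

python3 aws-security/scanner/scanner.py --json output-%a-%d.json --html output-%a-%d.html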

You can also request the cloud team to run the json extract for you. Once you have the json file, you can parse the output yourself using the --nocollect option. This simply skips the data ingestion step, reads the provided --json file, and parses the security rules.
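For example, if the file you received is called extract.json:

python3 aws-security/scanner/scanner.py --nocollect --json extract.json --html report.html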

Did you know you can specify an S3 path for the html or json files? That's right! You can write the HTML file directly to S3!
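For example (the bucket name is again a placeholder):

python3 aws-security/scanner/scanner.py --json s3://my-results-bucket/output.json --html s3://my-results-bucket/output.html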

It supports Slack! That's right: specify the --slack option with your webhook ID to get the report straight in your Slack feed.
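For example (the webhook ID below is a placeholder; use the one from your own Slack incoming webhook):

python3 aws-security/scanner/scanner.py --json output.json --html output.html --slack T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX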

Known issues

  • The script does support running AssumeRole to connect to another account where it has permissions. The issue, however, is that the provided credentials are only valid for 1 hour. If your data collection takes more than an hour to run, the script will start failing. As a workaround, the json file is constantly updated, so simply restarting the process will allow the script to continue where it left off and complete (see the retry sketch after this list).
  • Not all use cases could be tested. There is a chance that some data collection or policy parsing will fail, simply because I wasn't able to test it. For example, I do not have access to Direct Connect on my lab system (and I'm really not going to request a dedicated leased line just for that), so you may see some failures as a result. If so, simply open a case on the GitHub issue log and let me know, so I can resolve it.
  • The script takes too long to run as a Lambda function.
  • Data ingestion for the CloudFront list_functions call may fail. If so, update your boto3 Python library.
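For the AssumeRole timeout above, a minimal retry loop could look like this (assuming the script exits with a non-zero status when the credentials expire):

until python3 aws-security/scanner/scanner.py --json output.json --html output.html; do
  echo "collection interrupted - restarting to resume from the partial json file"
done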

What’s next?

This is where you come in. The main driver for this project is to give something back to the AWS community, to make AWS a more secure environment for its customers. Some of the things I'd still like to do are:

  • Fix the Lambda function. This will require decoupling the script, letting multiple Lambda functions collect the data from multiple regions, and possibly storing the data in DynamoDB.
  • Add more policies. Do you have some ideas? Log them in the GitHub issue log.
  • Add multi-threading. When connecting to individual regions, do it in a multi-threaded manner to speed up the data collector.
  • Build a web frontend. I am playing with the idea of turning the script into a full-blown CSPM solution.
  • Improve the policy engine. I've started converting the policies into jmespath queries. This is going to take a while. Any new policies (where feasible) will be added to a new policy configuration file.

Community Support

The project is hosted on GitHub, which means you can fork your own copy of the code and adjust it. All I ask is that you give credit, and that you contribute back to the overall project with source code suggestions or new policies you'd like to see.

https://github.com/massyn/aws-security
