Category Archives: Collecting

CommunityHoneyNetwork version 1.7 released!

We’re very excited to announce the release of version 1.7 of CommunityHoneyNetwork! In addition to the usual bug fixes (updating dependencies, etc.), there are some great new features available in 1.7, such as:

  • Honeypot tagging: Add a tag to your honeypot configuration that will show up in all logs. This gives you a flexible way to attach metadata about the collecting sensor directly to the log data (see the sysconfig sketch after this list).
  • Custom ACME server support with Certbot: If you use a certificate provisioning service other than LetsEncrypt that supports the ACME protocol, you can have CHN-Server talk directly to it!
  • Cowrie “Personalities”: Alter the SSH version, filesystem layout, output from commands, etc. using “personalities”. These are folders with bundles of cowrie configs that can be referenced in the sysconfig file to change the “look” of your cowrie honeypot, making it more difficult to identify.
  • Dionaea bistreams log rotation: Given the proliferation of WannaCry on the open internet, we found rotating the Dionaea bistreams logs to be critical in preventing the disk from filling up nearly every day.
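For illustration, here is a minimal sysconfig sketch for a Cowrie honeypot that combines tagging with a personality. The TAGS and PERSONALITY variable names are assumptions for the purpose of this sketch; check the documentation for the exact options your version supports:

# /etc/default/cowrie -- hypothetical sysconfig sketch; variable names are illustrative
CHN_SERVER="https://chn.example.org"   # URL of your CHN server
DEPLOY_KEY="XXXXXXXX"                  # deploy key generated by the CHN UI
TAGS="dmz,east-campus"                 # free-form metadata attached to every log event
PERSONALITY="ubuntu18"                 # folder of bundled Cowrie configs to apply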

From an administrative perspective, we added features to delete events from the database when a sensor is deleted, removed a number of unsupported dependencies, and added lots of documentation on topics such as integrating honeypot services with systemd, and using the CIF client to pull feeds of data.

We’re also making a major change in the way we reference images in the docker-compose.yml and deployment scripts: rather than referencing ‘latest’, we now specify a version that matches the server version. So, for instance, with CHN-Server version 1.7, the deployment scripts will reference version 1.7 tagged images directly. This makes releases easier for us to manage and test, and ensures that users don’t inadvertently find themselves with mismatched server and honeypot versions, or upgrading accidentally.
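As a concrete illustration, the generated compose file now pins a release tag instead of ‘latest’. The repository name below is an assumption; check your own docker-compose.yml for the actual image reference:

$ grep 'image:' docker-compose.yml
    image: stingar/chn-server:1.7    # pinned to the release tag, not :latest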

As with all upgrades in this project, we highly recommend that you take a fresh look at the documentation. As new features are added, there are often new options for sysconfig files that can impact the way your CHN instance functions. Once you’re comfortable with the changes, upgrading should be as easy as a “docker pull” followed by “docker-compose down && docker-compose up -d”! For the more risk-averse/VM-rich, given the ease of deployment you can always spin up a fresh instance and simply migrate old sensors to the new server via the deployment scripts.
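In shell terms, the in-place upgrade path looks roughly like this, run from the directory containing your docker-compose.yml:

$ cd /path/to/chn-server    # directory with your docker-compose.yml
$ docker-compose pull       # fetch the newly tagged images
$ docker-compose down && docker-compose up -d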

As always, feel free to reach out to us via the GitHub project (now with a Gitter IM room)!

Cheers,

Jesse

CHN v1.5 released

We’re pleased to announce the release of version 1.5 of CHN. This new version includes a number of bugfixes, new features, and a new home for documentation.

In order to support versioned documentation, we’ve moved all our documentation to https://communityhoneynetwork.readthedocs.io/. This enables us to build documentation for future releases as we go, and lets you get a sneak peek at upcoming features!

The major new feature is the inclusion of deployment scripts for honeypots in the CHN interface. Take a look at the instructions here to see how easy this has become! As a teaser, here’s a screenshot:

[Screenshot: scripted deployment of a Cowrie honeypot from the CHN server interface]

We hope today is a good day for you to try out CHN, and give us feedback!

— Jesse

A Quick Adventure in AWS

The Friday before Labor Day, I went through the exercise of setting up a new CHN instance: the server on a local VCL-like Ubuntu 18 image, Cowrie and Dionaea honeypots in each of three EC2 regions (Sydney, Singapore, Sao Paulo), and one Cowrie honeypot in the same VCL IP space, for a total of seven honeypots. All told, this took about 45 minutes (no automation on the EC2 setup, just creating a t2.nano instance with minimal specs). Most of that time was spent fumbling through EC2 setup and writing some simple bash scripts to copy/paste for host setup. Actual time spent setting up each honeypot was probably closer to 2 minutes, mostly spent pulling images over the network.
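For the curious, the copy/paste host setup amounted to something like the sketch below. It’s a hypothetical reconstruction: the Docker install via get.docker.com is a standard convenience script, while the deployment command is a placeholder for whatever the CHN server UI generates for your sensor:

#!/bin/bash
# Hypothetical EC2 host prep for a CHN honeypot (Ubuntu 18.04, default user 'ubuntu')
curl -fsSL https://get.docker.com | sudo sh   # install Docker via the convenience script
sudo usermod -aG docker ubuntu                # allow the default user to run docker
# Finally, paste the deployment command generated by the CHN server UI, e.g.:
# curl -sS http://chn.example.org/static/deploy_cowrie.sh | sudo bash -s -- <DEPLOY_KEY>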

The following Tuesday I pulled some stats to get a sense of what those honeypots saw over the holiday weekend.

$ curl -s "http://${CHN_SERVER}/api/intel_feed/?api_key=${API_KEY}&hours_ago=96&limit=10000" | jq '.meta'
{
  "size": 3070,
  "query": "intel_feed",
  "options": {
    "limit": "10000",
    "hours_ago": "96"
  }
}

This command hits the CHN server’s API endpoint to list all honeypot hits from the last 96 hours. A haul of 3,070 malicious IPs in 96 hours isn’t bad. But how applicable is that data to OUR network?

First I examined incoming flow records and found that we had inbound connections from 1055 of the CHN IPs in that time frame. Put another way, of all the IPs we collected in that 96-hour period, 34% also visited our network in the same window. Next I compared the CHN IPs against our threat intelligence sources for the same period (any locally generated threat intel from honeypots, network flow detections, host log reports, etc.) and found that our existing mechanisms detected 340 of these IP addresses (11% of all CHN IPs, 32% of the locally active CHN IPs).
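Here’s a rough sketch of how you might reproduce the first comparison. The .data[].source_ip field name is an assumption about the feed’s JSON shape, and local_flow_ips.txt stands in for a sorted, de-duplicated list of inbound source IPs from your own flow records:

$ curl -s "http://${CHN_SERVER}/api/intel_feed/?api_key=${API_KEY}&hours_ago=96&limit=10000" \
    | jq -r '.data[].source_ip' | sort -u > chn_ips.txt
$ comm -12 chn_ips.txt local_flow_ips.txt | wc -l   # IPs seen by both CHN and our flows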

In other words, our internal honeypots, threat intelligence feeds, and all other detection methods flagged only 32% of the locally active attacking IP addresses that honeypots distributed across multiple networks had identified. I think this speaks powerfully to the case for operating honeypots across numerous networks and sharing the data.

I’d also like to point out that our honeypot build process is now much easier (and more reliable). Our documentation now defaults to images built by us and hosted on Docker Hub, which eliminates many of the issues we saw when building images locally. Building locally still gives you a lot of flexibility to integrate with your central management as you wish, but using pre-built images GREATLY speeds up spin-up time.
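To make the contrast concrete (the image repository below is an assumption for illustration, not necessarily the project’s actual Docker Hub coordinates):

# Fast path: pull a pre-built image from Docker Hub
$ docker pull stingar/cowrie:1.5

# Flexible path: build locally from your clone of the honeypot repo
$ docker build -t cowrie:local ./cowrie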

If you’ve been putting off trying out CHN, please set aside a couple of hours on your calendar to try it out. If you ARE using the system today, it would be great if you could share your stories with us.

— Jesse

Join private STINGAR mailing list

Interested parties are encouraged to interact with the team via the project Github pages or in the Gitter IM community, which gives us a public space for quick questions.

Academic institutions can email Alex Merck at team-stingar@duke.edu to be added to the private STINGAR mailing list and Slack workspace.

Please include information about your organization’s interest in the STINGAR project in your request.