We are proud to announce the release of T-Pot 24.04! T-Pot 24.04 marks probably the largest change in the history of the project. While most of the changes have been made to the underlying platform, some stand out in particular: a T-Pot ISO image will no longer be provided, with the benefit that T-Pot now runs on multiple Linux distributions (Alma Linux, Debian, Fedora, OpenSuse, Raspbian, Rocky Linux, Ubuntu), Raspberry Pi (optimized) and macOS / Windows (limited).
T-Pot is the all-in-one, optionally distributed, multiarch (amd64, arm64) honeypot platform, supporting 20+ honeypots and countless visualization options using the Elastic Stack, animated live attack maps and lots of security tools to further improve the deception experience.
Install the OS with as minimal packages / services as possible (`ssh` required). Install `curl` if not installed already: `$ sudo [apt, dnf, zypper] install curl`. Then run the installer as non-root from `$HOME`:

```
env bash -c "$(curl -sL https://github.com/telekom-security/tpotce/raw/master/install.sh)"
```

You may opt out of the community data submission (`~/tpotce/docker-compose.yml`) by removing the `ewsposter` section. But in this case sharing really is caring!
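As a quick sanity check before piping the installer into bash, you can verify one of the stated prerequisites: the installer runs as a non-root user. This is purely an illustrative sketch (function name and messages are made up); the installer performs its own, more thorough checks.

```shell
# Illustrative pre-flight check: the installer must run as non-root.
# preflight takes a numeric user id and accepts anything but uid 0.
preflight() {
  if [ "$1" -eq 0 ]; then
    echo "refuse: run the installer as non-root from \$HOME"
    return 1
  fi
  echo "ok: uid $1 may run install.sh"
}

preflight 1000        # ok: uid 1000 may run install.sh
preflight 0 || true   # refuse: run the installer as non-root from $HOME
```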
T-Pot’s main components have been moved into the tpotinit
Docker image allowing T-Pot to now support multiple Linux distributions, even macOS and Windows (although both limited to the feature set of Docker Desktop). T-Pot uses docker and docker compose to reach its goal of running as many honeypots and tools as possible simultaneously and thus utilizing the host’s hardware to its maximum.
T-Pot offers docker images for the following honeypots …
… alongside the following tools …
… to give you the best out-of-the-box experience possible and an easy-to-use multi-honeypot system.
The source code and configuration files are fully stored in the T-Pot GitHub repository. The docker images are built and preconfigured for the T-Pot environment.
The individual Dockerfiles and configurations are located in the docker folder.
T-Pot offers a number of services which are basically divided into five groups (see `docker-compose.yml`).

During the installation and during the usage of T-Pot there are two different types of accounts you will be working with. Make sure you know the differences between the account types, since mixing them up is by far the most common reason for authentication errors.
Service | Account Type | Username / Group | Description |
---|---|---|---|
SSH | OS | `<OS_USERNAME>` | The user you chose during the installation of the OS. |
Nginx | BasicAuth | `<WEB_USER>` | The `<web_user>` you chose during the installation of T-Pot. |
CyberChef | BasicAuth | `<WEB_USER>` | The `<web_user>` you chose during the installation of T-Pot. |
Elasticvue | BasicAuth | `<WEB_USER>` | The `<web_user>` you chose during the installation of T-Pot. |
Geoip Attack Map | BasicAuth | `<WEB_USER>` | The `<web_user>` you chose during the installation of T-Pot. |
Spiderfoot | BasicAuth | `<WEB_USER>` | The `<web_user>` you chose during the installation of T-Pot. |
T-Pot | OS | `tpot` | This user / group is always reserved by the T-Pot services. |
T-Pot Logs | BasicAuth | `<LS_WEB_USER>` | The `<LS_WEB_USER>` accounts are automatically managed. |
Depending on the supported Linux distro images, hive / sensor, installing on real hardware, in a virtual machine or other environments there are different kinds of requirements to be met regarding OS, RAM, storage and network for a successful installation of T-Pot (you can always adjust ~/tpotce/docker-compose.yml
and ~/tpotce/.env
to your needs to overcome these requirements).
T-Pot Type | RAM | Storage | Description |
---|---|---|---|
Hive | 16GB | 256GB SSD | As a rule of thumb, the more sensors & data, the more RAM and storage is needed. |
Sensor | 8GB | 128GB SSD | Since honeypot logs are persisted (~/tpotce/data) for 30 days, storage depends on attack volume. |
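The table above can be turned into a quick self-check. The sketch below compares a host's RAM and storage against the Hive baseline; the thresholds mirror the table, everything else (function name, inputs) is hypothetical.

```shell
# Compare RAM / storage (in GB) against the Hive baseline from the
# table above: 16 GB RAM, 256 GB SSD.
meets_hive_reqs() {
  ram_gb="$1"; disk_gb="$2"
  if [ "$ram_gb" -ge 16 ] && [ "$disk_gb" -ge 256 ]; then
    echo "hive: ok"
  else
    echo "hive: insufficient (need >= 16 GB RAM, >= 256 GB storage)"
  fi
}

meets_hive_reqs 16 256   # hive: ok
meets_hive_reqs 8 128    # hive: insufficient (need >= 16 GB RAM, >= 256 GB storage)
```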
T-Pot does require …
If you need proxy support or otherwise non-standard features, you should check the docs of the supported Linux distro images and / or the Docker documentation.
All of the supported Linux distro images will run in a VM which means T-Pot will just run fine. The following were tested / reported to work:
Some configuration / setup hints:
Set the Display to Console Only during initial installation of the OS and afterwards back to Full Graphics.

T-Pot is only limited by the hardware support of the supported Linux distro images. It is recommended to check the HCL (hardware compatibility list) and test the supported distros with T-Pot before investing in dedicated hardware.
T-Pot is tested on and known to run on …
Some users report working installations on other clouds and hosters, e.g. Azure and GCP. Hardware requirements may differ. If you are unsure, research existing issues and discussions and run some functional tests. With T-Pot 24.04.0 and forward we made sure to remove settings that were known to interfere with cloud-based installations.
Besides the ports generally needed by the OS, e.g. for obtaining a DHCP lease, DNS, etc., T-Pot will require the following ports for incoming / outgoing connections. Review the T-Pot Architecture for a visual representation. Some ports will show up as duplicates, which is fine since they are used in different editions.
Port | Protocol | Direction | Description |
---|---|---|---|
80, 443 | tcp | outgoing | T-Pot Management: Install, Updates, Logs (i.e. OS, GitHub, DockerHub, Sicherheitstacho, etc.) |
64294 | tcp | incoming | T-Pot Management: Sensor data transmission to hive (through NGINX reverse proxy) to 127.0.0.1:64305 |
64295 | tcp | incoming | T-Pot Management: Access to SSH |
64297 | tcp | incoming | T-Pot Management: Access to NGINX reverse proxy |
5555 | tcp | incoming | Honeypot: ADBHoney |
5000 | udp | incoming | Honeypot: CiscoASA |
8443 | tcp | incoming | Honeypot: CiscoASA |
443 | tcp | incoming | Honeypot: CitrixHoneypot |
80, 102, 502, 1025, 2404, 10001, 44818, 47808, 50100 | tcp | incoming | Honeypot: Conpot |
161, 623 | udp | incoming | Honeypot: Conpot |
22, 23 | tcp | incoming | Honeypot: Cowrie |
19, 53, 123, 1900 | udp | incoming | Honeypot: Ddospot |
11112 | tcp | incoming | Honeypot: Dicompot |
21, 42, 135, 443, 445, 1433, 1723, 1883, 3306, 8081 | tcp | incoming | Honeypot: Dionaea |
69 | udp | incoming | Honeypot: Dionaea |
9200 | tcp | incoming | Honeypot: Elasticpot |
22 | tcp | incoming | Honeypot: Endlessh |
21, 22, 23, 25, 80, 110, 143, 443, 993, 995, 1080, 5432, 5900 | tcp | incoming | Honeypot: Heralding |
21, 22, 23, 25, 80, 110, 143, 389, 443, 445, 631, 1080, 1433, 1521, 3306, 3389, 5060, 5432, 5900, 6379, 6667, 8080, 9100, 9200, 11211 | tcp | incoming | Honeypot: qHoneypots |
53, 123, 161, 5060 | udp | incoming | Honeypot: qHoneypots |
631 | tcp | incoming | Honeypot: IPPHoney |
80, 443, 8080, 9200, 25565 | tcp | incoming | Honeypot: Log4Pot |
25 | tcp | incoming | Honeypot: Mailoney |
2575 | tcp | incoming | Honeypot: Medpot |
6379 | tcp | incoming | Honeypot: Redishoneypot |
5060 | tcp/udp | incoming | Honeypot: SentryPeer |
80 | tcp | incoming | Honeypot: Snare (Tanner) |
8090 | tcp | incoming | Honeypot: Wordpot |
Ports and availability of SaaS services may vary based on your geographical location.
For some honeypots to reach full functionality (i.e. Cowrie or Log4Pot) outgoing connections are necessary as well, in order for them to download the attacker’s malware. Please see the individual honeypot’s documentation to learn more by following the links to their repositories.
It is recommended to get yourself familiar with how T-Pot and the honeypots work before you start exposing towards the internet. For a quickstart run a T-Pot installation in a virtual machine.
Once you are familiar with how things work you should choose a network you suspect intruders in or from (i.e. the internet). Otherwise T-Pot will most likely not capture any attacks (unless you want to prove a point)! For starters it is recommended to put T-Pot in an unfiltered zone, where all TCP and UDP traffic is forwarded to T-Pot’s network interface. To avoid probing for T-Pot’s management ports you should put T-Pot behind a firewall and forward all TCP / UDP traffic in the port range of 1-64000 to T-Pot while allowing access to ports > 64000 only from trusted IPs and / or only expose the ports relevant to your use-case. If you wish to catch malware traffic on unknown ports you should not limit the ports you forward since glutton and honeytrap dynamically bind any TCP port that is not occupied by other honeypot daemons and thus give you a better representation of the risks your setup is exposed to.
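The recommended split (forward ports 1-64000 to T-Pot, restrict the management ports above 64000 to trusted IPs) can be sketched as a simple port classifier. Actual firewall rules depend on your vendor, so this helper is purely illustrative.

```shell
# Classify a destination port according to the recommended firewall policy:
# tcp/udp 1-64000 is forwarded to T-Pot, ports above 64000 (management)
# are reachable from trusted IPs only.
policy_for_port() {
  port="$1"
  if [ "$port" -ge 1 ] && [ "$port" -le 64000 ]; then
    echo "forward-to-tpot"
  else
    echo "trusted-ips-only"
  fi
}

policy_for_port 22      # forward-to-tpot (e.g. handled by Cowrie)
policy_for_port 64297   # trusted-ips-only (NGINX reverse proxy)
```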
Download one of the supported Linux distro images, follow the TL;DR instructions or git clone
the T-Pot repository and run the installer ~/tpotce/install.sh
. Running T-Pot on top of a running and supported Linux system is possible, but a clean installation is recommended to avoid port conflicts with running services. The T-Pot installer will require direct access to the internet as described here.
Choose a supported distro of your choice. It is recommended to use the minimum / netiso installers linked below and only install a minimalistic set of packages. SSH is mandatory or you will not be able to connect to the machine remotely.
| Distribution Name | arm64 |
|:---|:---|
| Raspberry Pi OS (64Bit, Lite) | download |
Clone the T-Pot repository, or follow the TL;DR and skip this section:

```
$ git clone https://github.com/telekom-security/tpotce
$ cd tpotce
$ ./install.sh
```
The installer will, among other things:

- Change the SSH port to `tcp/64295`
- Add the `dps` and `dpsw` aliases (`grc docker ps -a`, `watch -c "grc --colour=on docker ps -a"`)
- Add the `la`, `ll` and `ls` aliases (for `exa`, an improved `ls` command)
- Add `mi` (for `micro`, a great alternative to `vi` and / or `nano`)
- Add `tpot.service` to `/etc/systemd/system` so T-Pot can automatically start and stop

You will be prompted for the `sudo` (or `root`) password at least once. Once the installation is finished, reboot: `$ sudo reboot`
Sometimes it is just nice if you can spin up a T-Pot instance on macOS or Windows, e.g. for development, testing or just the fun of it. As Docker Desktop is rather limited, not all honeypot types or T-Pot features are supported. Also remember: by default the macOS and Windows firewalls block access from remote, so testing is limited to the host. For production it is recommended to run T-Pot on Linux.
To get things up and running just follow these steps:

1. Clone the repository: `git clone https://github.com/telekom-security/tpotce`
2. Change into the folder: `cd ~/tpotce`
3. Copy the compose file: `cp compose/mac_win.yml ./docker-compose.yml`
4. Create a `WEB_USER` by running `~/tpotce/genuser.sh`
5. Adjust the `.env` file by changing `TPOT_OSTYPE=linux` to either `mac` or `win`:

```
# OSType (linux, mac, win)
# Most docker features are available on linux
TPOT_OSTYPE=mac
```

6. Start T-Pot: `docker compose up`, or `docker compose up -d` if you want T-Pot to run in the background.
7. Stop T-Pot: press `CTRL-C` (if it was running in the foreground) and / or run `docker compose down -v` to stop T-Pot entirely.

With T-Pot Standard / HIVE all services, tools, honeypots, etc. will be installed on a single host which also serves as a HIVE endpoint. Make sure to meet the system requirements. You can adjust `~/tpotce/docker-compose.yml` to your personal use-case, or create your very own configuration using `~/tpotce/compose/customizer.py` for a T-Pot experience tailored to your needs.
Once the installation is finished you can proceed to First Start.
The distributed version of T-Pot requires at least two hosts:
Uninstallation of T-Pot is only available on the supported Linux distros.
To uninstall T-Pot run `~/tpotce/uninstall.sh` and follow the uninstaller instructions; you will have to enter your password at least once.

Once the uninstall is finished, reboot the machine: `sudo reboot`
Once the T-Pot Installer successfully finishes, the system needs to be rebooted (`sudo reboot`). Once rebooted you can log into the system using the user you set up during the installation of the system. Logins are according to the User Types:

- You can login via SSH to access the command line: `ssh -l <OS_USERNAME> -p 64295 <your.ip>` [login: `<OS_USERNAME>`]
- You can also login from your browser and access the T-Pot WebUI and tools: `https://<your.ip>:64297` [login: `<WEB_USER>`]

There is not much to do except to login and check via `dps.sh` that all services and honeypots are starting up correctly, and to login to Kibana and / or the Geoip Attack Map to monitor the attacks.
Once you have rebooted the SENSOR as instructed by the installer you can continue with the distributed deployment by logging into the HIVE and changing into the `~/tpotce` folder (`cd ~/tpotce`).

If you have not done so already, generate an SSH key to securely login to the SENSOR and to allow `Ansible` to run a playbook on the sensor. Run `ssh-keygen`, follow the instructions and leave the passphrase empty:
```
Generating public/private rsa key pair.
Enter file in which to save the key (/home/<your_user>/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/<your_user>/.ssh/id_rsa
Your public key has been saved in /home/<your_user>/.ssh/id_rsa.pub
```
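A non-interactive equivalent of the prompts above, with the same empty passphrase (the output path here is a throwaway example, not the path T-Pot expects):

```shell
# Generate a key pair without prompts: -N "" sets an empty passphrase,
# -f picks the output file, -q silences the banner.
tmpdir=$(mktemp -d)
ssh-keygen -t rsa -N "" -q -f "$tmpdir/id_rsa_demo"
# creates id_rsa_demo and id_rsa_demo.pub in $tmpdir
```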
Now copy the key to the SENSOR: `ssh-copy-id -p 64295 <SENSOR_SSH_USER>@<SENSOR_IP>`:
```
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/<your_user>/.ssh/id_rsa.pub"
The authenticity of host '[<SENSOR_IP>]:64295 ([<SENSOR_IP>]:64295)' can't be established.
ED25519 key fingerprint is SHA256:naIDxFiw/skPJadTcgmWZQtgt+CdfRbUCoZn5RmkOnQ.
This key is not known by any other names.
Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
<your_user>@172.20.254.124's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh -p '64295' '<your_user>@<SENSOR_IP>'"
and check to make sure that only the key(s) you wanted were added.
```
Test the login: `ssh -p '64295' '<your_user>@<SENSOR_IP>'`.

Then run `../deploy.sh` and follow the instructions.
T-Pot is provided in order to make it accessible to everyone interested in honeypots. By default, the captured data is submitted to a community backend. This community backend uses the data to feed Sicherheitstacho.
You may opt out of the submission by removing the `# Ewsposter service` section from `~/tpotce/docker-compose.yml` by following these steps:

1. Stop T-Pot: `systemctl stop tpot`
2. Open `~/tpotce/docker-compose.yml`: `micro ~/tpotce/docker-compose.yml`
3. Remove the `ewsposter` section and save the file (`CTRL+Q`):

```
# Ewsposter service
  ewsposter:
    container_name: ewsposter
    restart: always
    depends_on:
      tpotinit:
        condition: service_healthy
    networks:
```

4. Start T-Pot: `systemctl start tpot`
It is encouraged not to disable the data submission as it is the main purpose of the community approach - as you all know sharing is caring 😍
As an opt-in it is possible to share T-Pot data with 3rd party HPFEEDS brokers.

1. Open `~/tpotce/docker-compose.yml`.
2. Locate the `ewsposter` section and adjust the HPFEEDS settings to your needs.
3. If you want to use a TLS certificate, copy it to `~/tpotce/data/ews/conf` and set `EWS_HPFEEDS_TLSCERT=/data/ews/conf/<your_ca.crt>`.
4. Start T-Pot: `systemctl start tpot`.
Remote access to your host / T-Pot is possible with SSH (on tcp/64295
) and some services and tools come with T-Pot to make some of your research tasks a lot easier.
According to the User Types you can login via SSH to access the command line: `ssh -l <OS_USERNAME> -p 64295 <your.ip>` [login: `<OS_USERNAME>`]

According to the User Types you can open the T-Pot Landing Page from your browser via `https://<your.ip>:64297` [login: `<WEB_USER>`]

On the T-Pot Landing Page just click on `Kibana`
and you will be forwarded to Kibana. You can select from a large variety of dashboards and visualizations all tailored to the T-Pot supported honeypots.
On the T-Pot Landing Page just click on Attack Map
and you will be forwarded to the Attack Map. Since the Attack Map utilizes web sockets you may need to re-enter the <WEB_USER>
credentials.
On the T-Pot Landing Page just click on Cyberchef
and you will be forwarded to Cyberchef.
On the T-Pot Landing Page just click on `Elasticvue`
and you will be forwarded to Elasticvue.
On the T-Pot Landing Page just click on Spiderfoot
and you will be forwarded to Spiderfoot.
T-Pot offers a configuration file providing variables not only for the docker services (i.e. honeypots and tools) but also for the docker compose environment. The configuration file is hidden in `~/tpotce/.env`. There is also an example file (`env.example`) which holds the default configuration.
Before the first start, run `~/tpotce/genuser.sh` or set up the `WEB_USER` manually as described here.
In ~/tpotce/compose
you will find everything you need to adjust the T-Pot Standard / HIVE installation:
customizer.py
mac_win.yml
mini.yml
mobile.yml
raspberry_showcase.yml
sensor.yml
standard.yml
tpot_services.yml
The .yml
files are docker compose files, each representing a different set of honeypots and tools with tpot_services.yml
being a template for customizer.py
to create a customized docker compose file.
To activate a compose file follow these steps:

1. Stop T-Pot: `systemctl stop tpot`.
2. Copy the compose file of your choice: `cp ~/tpotce/compose/<dockercompose.yml> ~/tpotce/docker-compose.yml`.
3. Start T-Pot: `systemctl start tpot`.

To create your customized docker compose file:
1. Change into the compose folder: `cd ~/tpotce/compose`.
2. Run the customizer: `python3 customizer.py`.
3. The script will guide you through the creation of your own `docker-compose.yml`. As some honeypots and services occupy the same ports, it will check if any port conflicts are present and notify you about the conflicting services. You then can resolve them manually by adjusting `docker-compose-custom.yml` or re-run the script.
4. Stop T-Pot: `systemctl stop tpot`.
5. Copy the custom compose file: `cp docker-compose-custom.yml ~/tpotce` and `cd ~/tpotce`.
6. Check that everything works by running `docker-compose -f docker-compose-custom.yml up`. In case of errors follow the Docker Compose Specification for mitigation. Most likely it is just a port conflict you can adjust by editing the docker compose file.
7. If everything works, press `CTRL-C` to stop the containers and run `docker-compose -f docker-compose-custom.yml down -v`.
8. Replace the default compose file: `mv ~/tpotce/docker-compose-custom.yml ~/tpotce/docker-compose.yml`.
9. Start T-Pot: `systemctl start tpot`.
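The port-conflict check performed by `customizer.py` can be illustrated in a few lines of shell. The `service:port` pairs below are made-up input for the demonstration, not T-Pot's actual data model.

```shell
# Flag ports requested by more than one selected service.
printf '%s\n' 'cowrie:22' 'heralding:22' 'dionaea:21' |
  awk -F: '{count[$2]++; svcs[$2]=svcs[$2]" "$1}
           END {for (p in count) if (count[p] > 1)
                  print "port " p " conflict:" svcs[p]}'
# → port 22 conflict: cowrie heralding
```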
T-Pot is designed to be low maintenance. Since almost everything is provided through docker images there is basically nothing you have to do but let it run. We will upgrade the docker images regularly to reduce the risks of compromise; however you should read this section closely.
Should an update fail, opening an issue or a discussion will help to improve things in the future, but the offered solution will always be to perform a fresh install as we simply cannot provide any support for lost data!
T-Pot security depends on the updates provided for the supported Linux distro images. Make sure to review the OS documentation and ensure updates are installed regularly by the OS. By default (~/tpotce/.env
) TPOT_PULL_POLICY=always
will ensure that at every T-Pot start docker will check for new docker images and download them before creating the containers.
T-Pot releases are offered through GitHub and can be pulled using ~/tpotce/update.sh
.
If you made any relevant changes to the T-Pot config files make sure to create a backup first!
Updates may have unforeseen consequences. Create a backup of the machine or the files most valuable to your work!
The update script will …

- update all files in the `~/tpotce` folder
- force `~/tpotce` to be in sync with the T-Pot master branch
- restore your customized `ews.cfg` from `~/tpotce/data/ews/conf` and the T-Pot configuration (`~/tpotce/.env`).

The following issues are known; simply follow the described steps to solve them.
Some time ago Docker introduced download rate limits. If you are frequently downloading Docker images via a single or shared IP, the IP address might have exhausted the Docker download rate limit. Login to your Docker account to extend the rate limit.
T-Pot is designed to only run on machines with a single NIC. T-Pot will try to grab the interface with the default route, however it is not guaranteed that this will always succeed. At best use T-Pot on machines with only a single NIC.
The T-Pot service automatically starts and stops on each reboot (which occurs once daily, as set up in `sudo crontab -l` during installation).
If you want to manually start the T-Pot service you can do so via systemctl start tpot
and observe via dpsw
the startup of the containers.
The T-Pot service automatically starts and stops on each reboot (which occurs once daily, as set up in `sudo crontab -l` during installation).
If you want to manually stop the T-Pot service you can do so via systemctl stop tpot
and observe via dpsw
the shutdown of the containers.
All persistent log files from the honeypots, tools and T-Pot related services are stored in ~/tpotce/data
. This includes collected artifacts which are not transmitted to the Elastic Stack.
All log data stored in the T-Pot Data Folder will be persisted for 30 days by default.
Elasticsearch indices are handled by the tpot
Index Lifecycle Policy which can be adjusted directly in Kibana (make sure to “Include managed system policies”).
By default the tpot
Index Lifecycle Policy keeps the indices for 30 days. This offers a good balance between storage and speed. However you may adjust the policy to your needs.
All log data stored in the T-Pot Data Folder (except for Elasticsearch indices, of course) can be erased by running clean.sh
.
Sometimes things might break beyond repair and it has never been easier to reset a T-Pot to factory defaults (make sure to enter cd ~/tpotce
).
1. Stop T-Pot: `systemctl stop tpot`.
2. Move / copy the `~/tpotce/data` folder to a safe place (this is optional, just in case).
3. Remove the `~/tpotce/data` folder using `sudo rm -rf ~/tpotce/data`.
4. Reset the repository: `cd ~/tpotce/` and `git reset --hard`.
5. Re-run the T-Pot installer `~/tpotce/install.sh`.
You can show all T-Pot relevant containers by running `dps` or `dpsw [interval]`. The interval (in seconds) will re-run `dps.sh` periodically.
Blackhole will run T-Pot in a stealth-like mode without permanent visits from publicly known scanners, thus reducing the possibility of being exposed. While this is of course always a cat-and-mouse game, the blackhole feature null-routes all requests from known mass scanners while still catching the events through Suricata.
The feature is activated by setting `TPOT_BLACKHOLE=ENABLED` in `~/tpotce/.env`, then running `systemctl stop tpot` and `systemctl start tpot`, or `sudo reboot`.
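Editing the variable can be scripted. The sketch below works on a throwaway copy so the real `~/tpotce/.env` stays untouched:

```shell
# Flip TPOT_BLACKHOLE in a demo copy of the config file.
envfile=$(mktemp)
printf 'TPOT_BLACKHOLE=DISABLED\n' > "$envfile"
sed -i 's/^TPOT_BLACKHOLE=.*/TPOT_BLACKHOLE=ENABLED/' "$envfile"
cat "$envfile"   # TPOT_BLACKHOLE=ENABLED
```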
Enabling this feature will drastically reduce attackers' visibility and consequently result in less activity. However, as already mentioned, it is neither a guarantee for being completely stealthy nor will it prevent fingerprinting of some honeypot services.
Nginx (T-Pot WebUI) allows you to add as many <WEB_USER>
accounts as you want (according to the User Types).
To add a new user run ~/tpotce/genuser.sh
which will also update the accounts without the need to restart T-Pot.
To remove users open ~/tpotce/.env
, locate WEB_USER
and remove the corresponding base64 string (to decode: echo <base64_string> | base64 -d
, or open CyberChef and load “From Base64” recipe). For the changes to take effect you need to restart T-Pot using systemctl stop tpot
and systemctl start tpot
or sudo reboot
.
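Since each `WEB_USER` entry is just a base64 string, decoding it shows which user it belongs to. The entry below is a made-up example for the round trip, not a real credential:

```shell
# Round-trip a (fictional) WEB_USER entry through base64 to identify it.
entry='demo:$apr1$examplehash'       # single quotes keep the $ signs literal
b64=$(printf '%s' "$entry" | base64)
printf '%s' "$b64" | base64 -d       # prints demo:$apr1$examplehash
```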
Some T-Pot updates will require you to update the Kibana objects, either to support new honeypots or to improve existing dashboards or visualizations. Make sure to export first so you do not lose any of your adjustments.
Generally T-Pot is offered as is without any commitment regarding support. Issues and discussions can be opened, but be prepared to include basic necessary info, so the community is able to help.
- Check the status of the containers: `dps`
- Check resource usage: `htop`, `docker stats`
- Stop T-Pot: `systemctl stop tpot`
- Check for port conflicts: `grc netstat -tulpen`
- Adjust the compose file if needed: `mi ~/tpotce/docker-compose.yml`
- Watch the startup messages: `docker-compose -f ~/tpotce/docker-compose.yml up`
- Stop the containers with `CTRL+C` and `docker-compose -f ~/tpotce/docker-compose.yml down -v`
- Check the logs of a container: `docker logs -f <container_name>`
- Check the `tpotinit` log: `cat ~/tpotce/data/tpotinit.log`
The Elastic Stack is hungry for RAM, specifically logstash
and elasticsearch
. If the Elastic Stack is unavailable, does not receive any logs or simply keeps crashing it is most likely a RAM or storage issue.
While T-Pot keeps trying to restart the services / containers run docker logs -f <container_name>
(either logstash
or elasticsearch
) and check if there are any warnings or failures involving RAM.
Storage failures can be identified more easily via `htop`.
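On Linux, a quick look at available memory often settles the question before digging through container logs (a sketch; how many GiB you actually need depends on your stack sizing):

```shell
# Print available RAM in GiB; the Elastic Stack typically wants several
# GiB free for logstash and elasticsearch alone.
awk '/MemAvailable/ {printf "%.1f GiB available\n", $2 / 1048576}' /proc/meminfo
```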
T-Pot is provided as is open source without any commitment regarding support (see the disclaimer).
If you are a security researcher and want to responsibly report an issue please get in touch with our CERT.
Please report issues (errors) on our GitHub Issues, but troubleshoot first. Issues not providing information to address the error will be closed or converted into discussions.
Use the search function first, it is possible a similar issue has been addressed or discussed already, with the solution just a search away.
General questions, ideas, show & tell, etc. can be addressed on our GitHub Discussions.
Use the search function, it is possible a similar discussion has been opened already, with an answer just a search away.
The software that T-Pot is built on uses the following licenses.
GPLv2: conpot, dionaea, honeytrap, suricata
GPLv3: adbhoney, elasticpot, ewsposter, log4pot, fatt, heralding, ipphoney, redishoneypot, sentrypeer, snare, tanner
Apache 2 License: cyberchef, dicompot, elasticsearch, logstash, kibana, docker
MIT license: autoheal, ciscoasa, ddospot, elasticvue, glutton, hellpot, maltrail
Unlicense: endlessh
Other: citrixhoneypot, cowrie, mailoney, Elastic License, Wordpot
AGPL-3.0: honeypots
Public Domain (CC): Harvard Dataverse
Without open source and the development community we are proud to be a part of, T-Pot would not have been possible! Our thanks are extended but not limited to the following people and organizations:
The following companies and organizations
… and of course **you for joining the community!**
Thank you for playing 💖
Some of the greatest feedback we have gotten so far came from one of the Conpot developers:
“[…] I highly recommend T-Pot which is … it’s not exactly a swiss army knife .. it’s more like a swiss army soldier, equipped with a swiss army knife. Inside a tank. A swiss tank. […]”
And from @robcowart (creator of ElastiFlow):
“#TPot is one of the most well put together turnkey honeypot solutions. It is a must-have for anyone wanting to analyze and understand the behavior of malicious actors and the threat they pose to your organization.”
Thank you!