

Tom Pain

Announcing Sinensis - the all new CTF platform from RiceTeaCatPanda

4 min read

Hello! I’m Tom - a member of the RiceTeaCatPanda team and joint project leader on Project Sinensis. Since we announced Sinensis a couple of days ago, I thought I’d give a bit more of an in-depth look at the project and our plans for it.

What is Sinensis?

Sinensis is our effort to create a CTF platform that’s the best it can possibly be and as easy as possible to set up and get working, ultimately making the process of organising and playing a CTF just that little bit easier.

In the past, we’ve used the two most common platforms for running a CTF, CTFx and CTFd, and not been huge fans of either of them. CTFd is… well, CTFd. It’s slow, unwieldy and hard to set up in a manner that ensures it’ll run reliably. On the other hand, there’s CTFx. It’s a very good platform, but we’re not massive fans of the tab-based UI, and it lacks simple configuration options. Things like editing the homepage or creating new dynamic pages require you to dig around inside the PHP source code and change stuff there, since it’s entirely devoid of HTML templates and documentation. As a result (and just because it’s good fun), we decided we were going to write our own platform from scratch to fix these issues, and that we’d call it Sinensis.

How’s it built?

Sinensis has two distinct components - a React frontend and a Go API for the backend. Let’s talk a bit about why we chose these two technologies.

React was the obvious choice for the frontend. For one, it’s got a huge user base, meaning that plenty of people will be able to modify Sinensis for their own needs, or even better, contribute to the project. On top of that, it’s got a good ecosystem and it’s well supported and documented thanks to its backing by Facebook.

The choice of language for the backend was… less straightforward. There were at least four different languages that we considered for their different merits and detriments. Rust and Python were two of them, but in the end we went with Go for its good performance, ease of use and the wide variety of frameworks and libraries that exist. If you’re interested, we’re using the Express.js-inspired Fiber web framework and GORM to interface with the MySQL database that underpins the entire system.

What does it look like right now?

As of today (21st August 2020), we’ve been working on Sinensis for about two months, accumulating around 360 commits across our two repositories. In fact, Sinensis is fully open source and licensed under the GNU AGPL v3 - you can see the project repos here and here. Our aim is to have the project in a functional state in time for our next CTF, RTCP 2021, which goes live on 22nd January 2021. As for the frontend, here are some super fancy screenshots you can take a nosey at:

[Screenshots of the Sinensis frontend. Caption: Yes, we have a dark mode too. No, we don't expect you to use this.]

I’d show you screenshots of the API, but that’s just text and that’s boring. If you really want, you can download some of the latest development builds from here.

What do you want from Sinensis?

Part of the idea behind Sinensis is to make it the best platform out there. To do that, we need your help - we want to know what you want to see in Sinensis! If there’s a feature you think we should consider, or something you’d like us to make easier to do in Sinensis, let us know in the sinensis-suggestions channel of our Discord server - we’d love to hear from you!

Bonus - why’s it called Sinensis?

Everyone on the RiceTeaCatPanda team likes tea, which is a good thing. In fact, we like tea so much that we decided to name our project after it. More specifically, we named it after Camellia sinensis, the Latin name for the common tea plant.

If you’ve got any other questions or comments about Sinensis, feel free to ask in our Discord server. Thanks for reading!


Tom Pain

Pyourls3 - a learning experience

9 min read

Stuff I learnt and the challenges I faced publishing a package as a package-publishing newbie

My inspiration for Pyourls3 came from another Python package – python-yourls, an 11-year-old API wrapper library for the open source URL shortener, YOURLS. It’s listed on the official awesome list, but being written for Python 2 it’s a bit useless in this day and age. I set out to create my own package, keeping similar function names to the old one but completely rewriting it from scratch, using the fantastic Requests package that everyone and their dog uses. What resulted was a lightweight, efficient package that’s simple and easy to use, marking my first published Python package. This post documents the things I learnt and the challenges I faced along the way, as the focus of this project ended up being on learning.

Writing the package

Of the entire project, this was the simplest part of all. I’d created packages before, but none of them were ever intended for use by anyone, including myself. I started by listing the exceptions I could conceivably need; my final list contained 5 individual exceptions, ranging from a base exception that all the others inherit from to an exception raised if you try to shorten a URL that’s already been shortened.
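For illustration, here’s roughly what such an exception hierarchy could look like - the names here are made up and aren’t necessarily the ones Pyourls3 actually uses:

```python
# exceptions.py - an illustrative exception hierarchy for a YOURLS API wrapper.
# Class names are hypothetical examples, not the exact ones used in Pyourls3.

class Pyourls3Error(Exception):
    """Base exception that all other package exceptions inherit from."""


class Pyourls3ParamError(Pyourls3Error):
    """Raised when a required argument is missing or malformed."""


class Pyourls3HTTPError(Pyourls3Error):
    """Raised when the YOURLS API returns an unexpected HTTP status code."""


class Pyourls3APIError(Pyourls3Error):
    """Raised when the YOURLS API returns an error response."""


class Pyourls3URLAlreadyExistsError(Pyourls3APIError):
    """Raised when attempting to shorten a URL that has already been shortened."""
```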

I went on to create the main package: one main class with 5 functions inside it. Within these, I expanded on the functionality of python-yourls, offering a function to retrieve overall statistics for the YOURLS installation alongside the standard shortening, expanding and per-URL statistics functions. These posed little challenge: the __init__ function does most of the input validation, since the other functions mostly just submit data to the API. If invalid data is passed as one of the arguments to a function, the YOURLS API returns an error, which in turn raises an exception from that function, removing the need for much input validation elsewhere.
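Here’s a very rough sketch of what a wrapper class along those lines might look like - again, the method names and the YOURLS response handling are illustrative, not the exact public API of the published package:

```python
# api.py - a minimal sketch of a YOURLS wrapper class in the same spirit as
# Pyourls3. Names and response handling are illustrative placeholders.

import requests

from exceptions import Pyourls3ParamError, Pyourls3APIError


class Yourls:
    def __init__(self, addr, key):
        # Do the bulk of the input validation up front, so the individual
        # API methods can stay thin.
        if not addr:
            raise Pyourls3ParamError("addr")
        if not key:
            raise Pyourls3ParamError("key")

        self.api_endpoint = addr.rstrip("/") + "/yourls-api.php"
        self.key = key

    def _call(self, payload):
        # Every method just POSTs form data to the YOURLS API endpoint and
        # lets the server report invalid input back as an error response.
        payload.update({"signature": self.key, "format": "json"})
        response = requests.post(self.api_endpoint, data=payload)
        parsed = response.json()
        if parsed.get("status") == "fail":
            raise Pyourls3APIError(parsed.get("message", "unknown API error"))
        return parsed

    def shorten(self, url, keyword=None):
        payload = {"action": "shorturl", "url": url}
        if keyword is not None:
            payload["keyword"] = keyword
        return self._call(payload)

    def expand(self, short_url):
        return self._call({"action": "expand", "shorturl": short_url})

    def stats(self):
        return self._call({"action": "stats"})
```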

Testing

Because this package is publicly available and published on PyPI, it has to be tested to ensure its functionality and behaviour stay the same between versions. This proved to be more involved than I initially expected, having never done unit testing in Python before (or at all, really). I chose PyTest to write my tests, the basics of which are super simple and easy: you create a file whose name starts with “test”, and within that, functions whose names start with “test”, which are collected and run as part of a test session. If a function raises an error, that’s reported as a failure when running the tests; otherwise it passes. This premise is scalable and flexible, which is why PyTest is so good.
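For example, a tiny test file (reusing the hypothetical names from the sketches above) could be as simple as:

```python
# test_basic.py - a minimal PyTest example. PyTest collects files and
# functions whose names start with "test" and reports any that raise.
import pytest

from exceptions import Pyourls3ParamError
from api import Yourls


def test_missing_key_raises():
    # Constructing the wrapper without an API key should raise immediately.
    with pytest.raises(Pyourls3ParamError):
        Yourls("https://example.com", "")


def test_endpoint_is_built_correctly():
    yourls = Yourls("https://example.com/", "secret")
    assert yourls.api_endpoint == "https://example.com/yourls-api.php"
```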

But my package requires an HTTP server to work properly, which poses an interesting question: how do I test it with PyTest if it needs a webserver to talk to? I can’t use an actual YOURLS installation, as making it work properly would mean storing valid admin credentials in the GitHub repository, which I’d rather not do - not to mention that the webserver would have to stay up and running forever so tests could be run whenever. The solution to this fiasco is to create a Flask server: if I can set up a Flask server that runs in the background on localhost and emulates the responses of an actual YOURLS installation, it’ll be good enough to run tests against.

The first order of business was to make sure it was possible to run a Flask server and make requests to it from the same Python process, which it is. I began writing the webserver code, and soon realised I needed a way to transfer two bits of data between two modules, which again took more work than I expected. There’s no straightforward way to do it - I tried adding my variable to globals and to sys.argv, but nothing worked. This stumped me for quite some time, until I found a user on StackOverflow suggesting importing a common module into each script in use at the same time, i.e. the webserver and the test script. By defining some variables in that third module, setting them to None, and then importing the module itself (rather than the names inside it), I could modify a variable in one file and immediately read the new value from another. To leverage this, I created two variables to use for data interchange (there’s a sketch of the setup after the list below):

  • One variable to act as a modifier, altering the response from the webserver (useful for tests that relied on malformed JSON or an error being returned from the server, for example)
  • Another variable to store all the POST arguments from the previous request sent to the webserver, used to check that the program was sending all the arguments it was meant to be sending.
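Here’s a rough sketch of that arrangement. The file names (shared.py, fake_yourls.py, conftest.py) and the use of a PyTest fixture are my illustration, not necessarily how the real test suite is laid out:

```python
# shared.py - the common module imported by both the fake webserver and the
# tests. Importing the module itself (not names from it) means both sides see
# changes made to these attributes immediately.
response_modifier = None   # lets a test ask the server to misbehave
last_request_args = None   # the form data from the most recent request
```

The tests then start the Flask app once, in a background thread:

```python
# conftest.py - start the fake YOURLS server in a background thread once per
# test session. File and fixture names are illustrative.
import threading
import time

import pytest

from fake_yourls import app  # the Flask app emulating YOURLS


@pytest.fixture(scope="session", autouse=True)
def fake_server():
    # daemon=True means the thread dies with the test process, so there's no
    # need to shut the server down explicitly.
    thread = threading.Thread(
        target=lambda: app.run(port=5000, debug=False, use_reloader=False),
        daemon=True,
    )
    thread.start()
    time.sleep(0.5)  # give the server a moment to start listening
    yield
```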

It worked like a charm; the majority of my tests relied on this in some way, shape or form.

As for what the test webserver sends back, it’s all predefined responses, with the appropriate HTTP response codes mixed in where needed. YOURLS uses status codes inconsistently in its API responses, so I used them equally inconsistently (as in, I copied what the real API does).
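As a rough idea of what one of those routes could look like (again, purely illustrative - the real thing copies its responses from a live YOURLS install):

```python
# fake_yourls.py - a Flask route emulating the YOURLS API endpoint.
# Route name and response bodies are illustrative placeholders.
from flask import Flask, request, jsonify

import shared

app = Flask(__name__)


@app.route("/yourls-api.php", methods=["POST"])
def yourls_api():
    # Record the POST arguments so tests can assert the client sent the
    # right thing.
    shared.last_request_args = dict(request.form)

    # Let a test force an error or malformed JSON via the shared modifier.
    if shared.response_modifier == "server_error":
        return "Internal Server Error", 500
    if shared.response_modifier == "malformed_json":
        return "{not valid json", 200

    # Default: a canned "shorturl" success response.
    return jsonify({"status": "success", "shorturl": "https://example.com/abc"}), 200
```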

It’s all well and good running my tests in my local development environment (Windows 10, in a Python 3.7 venv), but I wanted cross-platform compatibility, and I thought I might as well officially support all the versions that Requests does, which is everything post Python 3.4. (I did not support Python 2 - I started programming in Python way after Python 3 was released and never did anything with Python 2, so I’m just going to stay away from it.)

There was no way I was going to set up three VMs (Windows, Linux and macOS - is it even possible to make an up-to-date macOS VM?) to test my package, each with three different versions of Python installed. Instead, I turned to Travis CI - an automated testing tool that hooks onto the back of a GitHub repository and runs a predetermined set of tests every time there’s a commit. It takes a bit of configuring but works flawlessly once it’s up and running. It’s also free for open source projects.

Travis CI is configured using a YAML file in the root directory of the target GitHub repository. Once you add the repository to your Travis account, it starts running tests based on the parameters specified in that YAML file. It took a bit of work, but eventually I got a decent setup working for all three supported Python versions on Windows and Linux, plus Python 3.7.4 on macOS. If any of these tests ever fails, I get an email telling me about it.
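A heavily trimmed-down sketch of such a .travis.yml might look like this. It’s Linux-only - the real config needs extra steps to get Python installed on the Windows and macOS images:

```yaml
# .travis.yml - minimal illustrative Travis CI config for a Python package.
language: python
os: linux

python:
  - "3.5"
  - "3.6"
  - "3.7"

install:
  - pip install .
  - pip install pytest

script:
  - pytest
```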

Documentation

At this point, I got to enjoy writing some documentation, which was absolutely my least favourite part of the entire project. I used MkDocs to generate the documentation from Markdown files (and more YAML), along with BOOTSTRA.386, a weird retro theme for Bootstrap that also works with MkDocs. The writing process is as simple as writing a load of Markdown, adding some basic entries to a YAML file, and running one command. This produces a folder of HTML and CSS that you just drag and drop onto your (actual) webserver, and it just works, which is nice.
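That YAML file is mkdocs.yml, and a bare-bones one looks something like this (the theme identifier is a guess - check the BOOTSTRA.386 project for the exact name it registers with MkDocs):

```yaml
# mkdocs.yml - minimal illustrative MkDocs configuration.
site_name: Pyourls3
theme: bootstrap386  # illustrative; use whatever name the BOOTSTRA.386 theme provides
nav:
  - Home: index.md
  - API reference: api.md
```

Running `mkdocs build` then spits the static site out into a `site/` directory, ready to be copied across.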

Packaging and publishing

If you’re going to bother making a Python package, you might as well put it on PyPI - it’s the standard for package distribution within Python, and pip makes installing PyPI packages super easy.

To begin with, you have to install the Wheel and Twine packages, and update the Setuptools package that comes with Python (don’t uninstall it, because doing so completely breaks pip, and you can’t even reinstall it because pip uses Setuptools for every package installation - I learnt this from experience). You’re then going to have to write a setup.py file, which details basic information about the package: its version, a short and long description, the maintainer’s name and email address, the package’s web page, its dependencies and a few other bits and bobs. It again takes some tweaking, but given time it can be perfected. I found it helpful to look at a few other packages’ setup.py files to see what was going on in theirs, and nick little snippets in the process.
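A stripped-down setup.py along those lines might look something like this - the values are placeholders rather than Pyourls3’s actual metadata:

```python
# setup.py - illustrative packaging metadata; the values below are placeholders.
import setuptools

# Assumes a README.md sits alongside setup.py to use as the long description.
with open("README.md", encoding="utf-8") as f:
    long_description = f.read()

setuptools.setup(
    name="pyourls3",
    version="1.0.0",
    description="A Python 3 API wrapper for the YOURLS URL shortener",
    long_description=long_description,
    long_description_content_type="text/markdown",
    author="Tom Pain",
    url="https://www.github.com/codemicro/pyourls3",
    packages=setuptools.find_packages(),
    install_requires=["requests"],
    python_requires=">=3.4",
)
```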

You’re then ready to build wheels and source archives using the Wheel package. This spits out two files into a “dist” directory (one a .whl, the other a .tar.gz), which can then be uploaded to PyPI using Twine. Again, it’s as simple as running one command, sticking your PyPI credentials in, and it uploads your package for you, giving you a URL at the end where you can see your package on the PyPI website.
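Concretely, the build-and-upload dance boils down to a couple of commands like these:

```shell
# Build a source archive (.tar.gz) and a wheel (.whl) into dist/
python setup.py sdist bdist_wheel

# Upload everything in dist/ to PyPI (Twine prompts for your credentials)
twine upload dist/*
```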

I created a simple batch script that automated deleting the old wheels, tarballs and documentation, building new ones, and uploading the new wheels to PyPI, because I can’t be bothered to remember three commands.

The very last step for me was submitting a pull request to awesome-yourls to get my package listed on the awesome list (and hence in the readme.html file included with every YOURLS installation, as well as on their homepage at yourls.org).

You can view the project here:
GitHub: https://www.github.com/codemicro/pyourls3
Docs: https://www.tdpain.net/projects/pyourls3