Solution for Natas11 for natas wargame on overthewire.org

Posted on September 10, 2015 in Php • Tagged with Wargames, Php, Programming, Security • 2 min read

Solution for the Natas web security wargame by XORing the plaintext with the ciphertext...

Currently I am playing some wargames on overthewire.org.

The first 10 levels were very easy and everyone with some technical knowledge and programming experience should be able to solve them. But somehow I got stuck for a few hours on level 11. The task is to modify an XOR-encrypted cookie. For some reason I couldn't figure out how to obtain the XOR key that was used.

The challenge was to reverse engineer the key, given the plaintext and the ciphertext. Of course I should have realized very quickly that XORing the plaintext with the ciphertext yields back the key. But why is this so? Consider the following math:

plaintext XOR ciphertext
    == plaintext XOR (plaintext XOR key)
    == (plaintext XOR plaintext) XOR key
    == 00000... XOR key
    == key

As you can see, the plaintext cancels out. If the plaintext were a single byte, say 1100 1101, then XORing this byte with itself yields:
    1100 1101
XOR 1100 1101
    ---------
    0000 0000
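To see the idea in isolation, here is a minimal Python sketch (not the PHP solution below; the key and the cookie contents are made up for the demo): XORing a known plaintext with its ciphertext gives back the repeated key stream, in which the key is plainly visible.

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

key = b"K3Y!"                                             # made-up demo key
plaintext = b'{"showpassword":"no","bgcolor":"#ffffff"}'  # example cookie contents
keystream = (key * (len(plaintext) // len(key) + 1))[:len(plaintext)]
ciphertext = xor_bytes(plaintext, keystream)

# plaintext XOR ciphertext gives back the key stream, i.e. the key repeated
recovered = xor_bytes(plaintext, ciphertext)
print(recovered)   # b'K3Y!K3Y!K3Y!...'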

To finally get to the solution of the wargame, you can save the following code as a PHP file and run it:

<?php

function …

Continue reading

Cross platform Lichess Cheat

Posted on August 12, 2015 in Chess • Tagged with Software, Python, Programming, Chess • 5 min read

Edit: Cheat updated on 1.10.2015

Visit the Lichess Bot Projects Page for the newest information about this bot! The description and code below will probably not work anymore!


Hello Everyone

Once in a while I like to play chess on lichess. But sometimes I get beaten up too harshly, such that I want to take some revenge :D. Recently I created a new cheat for lichess. You can find the whole source code in my lichess cheat GitHub repository. If you want to use the cheat, please follow this tutorial:

  1. Download and install Python 3.4 (or newer) for your operating system from here: python web site
  2. Add Python to your system path so that you can run Python files from anywhere (this step depends on the operating system you are using)
  3. Then download the python cheat from here. It is the file with the .py suffix
  4. Then execute the Python cheat file. Just go to the directory where you saved it and enter in a shell: `python cheat_server.py`
  5. Open your browser (tested with Chrome and Firefox) and add the HTTP proxy server in the network settings that is output in the …

Continue reading

A lot of work to do for GoogleScraper in the future and request for comments!

Posted on March 01, 2015 in Googlescraper • Tagged with Software, Python, Programming, Googlescraper • 3 min read

Hello dear readers

I get a lot of mail with questions about GoogleScraper. I really appreciate it, but at some point I cannot answer all of it anymore. In the last weeks I didn't have a lot of time (and, I must admit, motivation) to put into GoogleScraper.

The reason is that I am still uncomfortable with the architecture of GoogleScraper. There are basically two ways to use the tool:

  • As a command line tool
  • From another program over the API (programming approach)

and furthermore there are 3 very different modes GoogleScraper runs in:

  • http mode
  • selenium mode, which can again be divided into Firefox, Chrome and PhantomJS selenium browsers
  • asynchronous mode

Of these, I think that selenium mode is the hardest to work with (very buggy and complex to program against). This leads to a complex software architecture, mainly because the two operational modes (CLI tool and API) have different priorities for how to handle exceptions.

The CLI tool should be VERY robust and it should do everything it can to continue scraping with the remaining resources (like proxies, RAM when lots of selenium instances become an issue, network bandwidth, ...), because the user cannot handle these problems by himself when he calls GoogleScraper …
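To illustrate the difference, here is a hypothetical sketch (not GoogleScraper's actual code) of the same scrape loop under the two policies: the CLI variant skips failed jobs and keeps going with whatever resources remain, while the API variant re-raises so the calling program can decide what to do.

def run_job(job):
    # stand-in for a real scraping call; one job fails to show both policies
    if job == "bad-proxy":
        raise ConnectionError("proxy unreachable")
    return "results for " + job

def scrape_all(jobs, api_mode=False):
    results = []
    for job in jobs:
        try:
            results.append(run_job(job))
        except Exception as err:
            if api_mode:
                raise               # API mode: let the calling program handle it
            print("skipping failed job, continuing with remaining resources:", err)
    return results

print(scrape_all(["python scraping", "bad-proxy", "serp parsing"]))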


Continue reading

Implementing two Graph traversal algorithms in Python: Depth First Search and Breadth First Search

Posted on January 24, 2015 in Learning • Tagged with Programming, Learning, University • 2 min read

Depth First Search and Breadth First Search

I am right in front of a ton of exams and I need to learn about algorithms and data structures. When I read the pseudocode of graph traversal algorithms, I thought:
Why not actually implement them in a real programming language? So I did, and now you can study my code here. I guess this problem has been solved a thousand times before, but I learnt something and I hope my approach has some uniqueness to it.

Additionally, you can also generate a topological order after you have traversed the whole graph, which is a nice little extra.

If you want the most recent version of the code, you can visit its own Github repo here.

Well, here's the code. Just download and run it like this: python graph_traversal.py

# -*- coding: utf-8 -*-

__author__ = 'Nikolai Tschacher'
__version__ = '0.1'
__contact__ = 'admin@incolumitas.com'


import time
from collections import deque

"""
This is just a little representation of two basic graph traversal methods.

    - Depth-First-Search
    - Breadth-First-Search

It's by no means meant to be fast or performant. Rather it is for educational
purposes and to understand it better for myself.
"""


class Node(object):
    """Represents a node …

Continue reading

Very good program to record audio and desktop on Linux!

Posted on January 18, 2015 in Linux • Tagged with Linux, Software • 2 min read

First post in the new year!

Hey

Happy new year to all of you and let 2015 be a successful year for us all!

My New Year's resolution is to write at least two blog posts every month and try to get my scraping service on scrapeulous.com up and running!

Good program to record the desktop/audio on linux

But what I really wanted to share today is an awesome way to record your desktop with audio on Linux. I tried my luck several times with VLC, but it's a freaking pain in the ass to use. Furthermore, VLC will probably never be able to capture the desktop with audio (See this stackoverflow thread for more info).

But I just found a wonderful alternative (one could almost assume that I am advertising, which is not the case, I swear!):

http://wiki.ubuntuusers.de/recordMyDesktop

If you want to visit the home page of the program, click here. Although the home page is very ugly and the program is no longer in active development, it just works like a charm. On Ubuntu you may install it like this:

sudo apt-get install recordmydesktop

Then go to a directory where you want …
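For reference, the most basic invocation looks like this (I am writing the flag from memory, so double-check with recordmydesktop --help):

recordmydesktop -o screencast.ogv

It records the whole desktop (including audio) until you stop it with Ctrl+C and then encodes everything into an Ogg Theora/Vorbis file.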


Continue reading

Scraping and Extracting Links from any major Search Engine like Google, Yandex, Baidu, Bing and Duckduckgo

Posted on November 12, 2014 in Meta • Tagged with Scraping, Baidu, Extracting, Google, Programming, Python, Searchengine, Bing, Meta • 7 min read

Prelude

It's been quite a while since I worked on my projects. But recently I had some motivation and energy left, which is quite nice considering my full-time university week and a programming job on the side.

I have a little project on GitHub that I have worked on every now and again over the last year or so. Recently it got a little bit bigger (I have 115 GitHub stars now; I would never have imagined that I'd ever achieve this) and I receive up to two emails with job offers every week (sorry if I cannot accept any requests :( ).

But unfortunately my progress with this project is not as good as I want it to be (that's probably quite a common feeling among us programmers). It's not a problem of missing ideas and features that I want to implement; the hard part is to extend the project without blowing up the legacy code. GoogleScraper has grown evolutionarily and I am wasting a lot of time trying to understand my old code. Mostly it's much better to just erase whole modules and reimplement things completely anew. This is essentially what I did with the parsing module.

Parsing SERP pages with many search engines

So I …


Continue reading