Querying CT Logs, Looking For Certificates
25 Mar 2016 2:46 EST

Recently I wanted to run a complex query across every certificate in the CT logs. That would obviously take some time to process - but I was more interested in ease-of-execution than I was in making things as fast as possible. I ended up using a few tools, and writing a few tools, to make this happen.

Catlfish is a CT log server written by a friend (and CT Gossip co-author). I'm not interested in the log server itself, just the tools - specifically fetchallcerts.py, which downloads the logs.
fetchallcerts.py requires the log keys, in PEM format. (Not sure why.) Run this tool to download all the logs' keys.
fetchallcerts.py only works on one log at a time, so a quick bash script can run it across all the logs.
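
If you'd rather stay in Python, a wrapper along these lines works. This is a sketch: the log list is illustrative and fetchallcerts.py's actual command-line arguments are an assumption - check its usage output first.

```python
import subprocess

# Hypothetical log list; fetchallcerts.py's real arguments may differ
# from those assumed below - check its usage output before relying on this.
LOGS = [
    "https://ct.googleapis.com/pilot",
    "https://ct.googleapis.com/aviator",
    "https://ct.googleapis.com/rocketeer",
]

def store_name(log_url: str) -> str:
    # Name each log's output directory after the URL's last path component.
    return log_url.rstrip("/").rsplit("/", 1)[-1]

def fetch_all(logs=LOGS):
    for log in logs:
        # Assumed invocation: one log URL per run, output segmented per log.
        subprocess.run(["python", "fetchallcerts.py", log, store_name(log)],
                       check=True)
```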

With these tools you can download all the certificates in all the logs except the two logs that use RSA instead of ECC (CNNIC and Venafi). They come down in zip files and take up about 145 GB.

Now we need to process them! For that you can use findcerts.py. The script uses python's multiprocessing (one process per CPU) to handle one zip file per worker, and uses pyasn1 and pyx509 to parse the certificates. You write the filtering function at the top of the file, and you can also choose which certs to process (leaf, intermediate(s), and root). You can limit the filtering to a single zip file (for testing) or to a single log (since logs often contain duplicates of each other).

The example criterion I have in there looks for a particular domain name. This is a silly criterion - there are much faster ways to find certs matching a domain name. But if you want to search for a custom extension or a combination of extensions, it makes a lot more sense. You can look at pyx509 to see what types of structures are exposed to you.
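
For instance, a predicate that searches for a particular extension might look like this. This is only a sketch of the shape of such a filter - the hook name and the certificate object layout are assumptions, not findcerts.py's actual interface (the OID shown is the real embedded-SCT extension OID):

```python
# Hypothetical filter in the style of findcerts.py's criteria function -
# the real script's hook name and certificate object layout may differ.
def matches(cert) -> bool:
    """Return True for certs carrying a particular extension OID."""
    TARGET_OID = "1.3.6.1.4.1.11129.2.4.2"  # the embedded-SCT list extension
    return any(ext.oid == TARGET_OID for ext in cert.extensions)
```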

A word on minutiae - pyasn1 is slow. It's full-featured, but it's slow. With the stock library it took about 18 minutes to process a zip file. Using cython and various other tweaks and tricks in both it and pyx509, I got that down to about 4 minutes - 1.5 if you only process leaf certs. So I'd recommend using my branches of pyasn1 and pyx509.

All in all, it's definitely not the fastest way to do this - but it was the simplest. I can run a query across one of the Google logs in about 18 hours, which is fast enough to satisfy my curiosity for most things.

All About Tor
14 May 2015 00:04:23 EST

A little bit ago NCC Group North America had an all-hands retreat and solicited technical talks. I fired off a one-line e-mail: "All About Tor - Everything from the Directory Authorities to the Link Protocol to Pluggable Transports to everything in between." And promptly forgot about it for... a couple months. I ended up building the deck with a level of detail I thought was about 80% of what I wanted, and gave a dry run of my 45-minute talk. It ran two full hours.

I cut a bunch of content for the talk, but knew I would need to finish the whole thing and make it available. Which I finally did! The slides are available here, and are released CC Attribution-ShareAlike. The source for the presentation is available in keynote format.

Major thanks to all the folks I bugged to build this, especially Nick Mathewson, and those who gave me feedback on mailing lists.

Thumbnail of slides

An Experimental "RequireCT" Directive for HSTS
20 Feb 2015 10:54:23 EST

A little bit ago, while in London at Real World Crypto and hanging out with some browser and SSL folks, I mentioned the thought "Why isn't there a directive in HSTS to require OCSP Stapling?" (Or really just hard fail on revocation, but pragmatically they're the same thing.) They responded, "I don't know, I don't think anyone's proposed it." I had batted the idea around a little bit, and ended up posting about it, but it didn't get much traction. I still think it's a good idea I'll probably revisit sometime soon... but while thinking more about it and in other conversations a more immediate thought came up.

What about requiring Certificate Transparency?

The motivation behind requiring Certificate Transparency for your site is to ensure that any certificate used to authenticate your site is publicly logged. Excepting, of course, locally installed roots - which everyone is upset about because of Superfish - but that's independent of the topic at hand. As a site operator, you can be certain that no one has compromised or coerced a CA (maybe even the CA you chose to pin to) into issuing a certificate for your site behind your back. Instead, you can see that certificate in a Certificate Transparency log!

Deployment, and Risks

I mentioned pinning. Pinning is a very strong security mechanism, but also a very risky one. You can lock your users out of your site by incorrectly pinning to the wrong Intermediate CA or losing a backup key. Requiring CT is also risky.

But the risk of footgunning yourself is much lower and, as with requiring OCSP Stapling, mostly comes down to getting into a situation where your infrastructure can't cash the check your mouth just wrote. CT has three ways to get an SCT to the user: a TLS extension, a certificate extension, and an OCSP extension. So (in theory) there are more ways to support your decision.

In reality, not so much. Certificate Authorities are gearing up to support CT, and if you work closely with one, you may even be able to purchase a cert with embedded SCTs. (DigiCert says all you have to do is contact them, same with Comodo.) So depending on your choice of CA, you may be able to leverage this mechanism.

Getting an SCT into an OCSP response is probably trickier. Not only does it require the cooperation of the CA, but because most CAs purchase the software and hardware that run their OCSP responders, it likely requires that vendor to do some development as well. I'm not aware of any CA that supports this mechanism of delivering SCTs, but I could be wrong. Apparently, Comodo supports the OCSP delivery option! (And it's very easy for them to enable it, which they so very nicely did for ritter.vg. So you can try it out on this site, live.)

Fortunately, the third mechanism is entirely in your control as the server operator. You can deliver the SCTs in a TLS extension if the client requests them. Sounds great right? Heh. Let's go on an adventure. Or, if you want to skip the adventure and just see the screenshots, you can do that too.

Now, to be clear, CT is in its infancy. So things will get easier. In fact right now CT is only deployed to enable Extended Validation indicators in Chrome - and for nothing else. So don't take this blog post as a critique of the system. Rather take it as jumping ahead a couple years and proofing out tomorrow's protection mechanisms, today.

Let's log a cert.

First off, before you can actually deliver SCTs to your clients, you have to get an SCT - so you have to log your cert. There are several functional, operating logs. Submitting a certificate is not done through a web form, but rather an API, as specified in the RFC.
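
The API call at the heart of submission is small. Here's a sketch of the core request - the `/ct/v1/add-chain` endpoint and response fields come from RFC 6962, but error handling is omitted and any log URL you pass is up to you:

```python
import base64
import json
import urllib.request

def build_add_chain_body(chain_der):
    """JSON body for RFC 6962's add-chain call: the chain as base64-encoded
    DER certificates, leaf first."""
    return json.dumps(
        {"chain": [base64.b64encode(der).decode("ascii") for der in chain_der]}
    ).encode()

def submit_chain(log_url, chain_der):
    """POST the chain to a log's /ct/v1/add-chain endpoint and return the
    JSON response (sct_version, id, timestamp, extensions, signature)."""
    req = urllib.request.Request(
        log_url.rstrip("/") + "/ct/v1/add-chain",
        data=build_add_chain_body(chain_der),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```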

To help with this, I created a tiny python script. You will need to open your site in your browser first, and download your certificate and the entire certificate chain up to a known root. Store them as Base64-encoded .cer or .pem files. They should be ASCII and start with "-----BEGIN CERTIFICATE-----".

Now call submit-cert.py with the certificate chain, in order, starting with the leaf. You can specify a specific log to submit to, or you can just try them all. No harm in that!

./submit-cert.py --cert leaf.cer --cert intermediate1.cer --cert intermediate2.cer --cert root.cer
        Timestamp 1410307847307
        Signature BAMARzBFAiEAzNhW8IUPJY1c8vbLDAmufuppc9mYdBLbtSwTHLrnklACID5iG8kafP8pcxny1yKciiewhg8VRybMR4h3wJlTV3s5
Error communicating with izenpen
        Timestamp 1424321473254
        Signature BAMARzBFAiA091WNEs3R/SWVjRaAlpwUpY0l/YYgUH3sMYBlI4XB9AIhAPVMyODwhig48IpE0EJgzKpdAi/iorBUIuy1qH4qrO5g
Error communicating with digicert
        Timestamp 1406484668045
        Signature BAMARzBFAiEA2TVxYDf30ndQlANozAp+HVQ1IFyfGRjsZPa3TZWeeRcCIFFDpPnHQbxfhXQ7bXtueAFiiGG3HfvWqFnc9L+M/+pt
        Timestamp 1406495947353
        Signature BAMARzBFAiEAvckWLUX2H/p1dPbZmn/kaxeAbAEqehQYsgscJMzrqNYCIGGQaJ0MtG8Z13+nk2sstFAwqN+t8wsAEqNdZZmrL0e0

You can see that we had a few errors: we couldn't submit to the two CA-run logs, DigiCert and Izenpe. I think one of them is restricting IPs and the other may not accept the particular root I'm issued off of. No worries though - we got success responses from all three Google logs and the other independent log, Certly. You'll be able to see your certificates within 24 hours at ctwatch.net, which is a monitor and a nice front-end for querying the logs. (The only one I'm aware of, actually.)

Something else you might notice if you examine things carefully is that your certificate is probably already in the Google logs. (Check ctwatch.net for your cert now.) They pre-seeded their logs with, as far as I can tell, every certificate they've seen on the Internet.

Let's get OpenSSL 1.0.2

Wait, what? Yea, before we can configure Apache to send the TLS extension, we need to make sure our TLS library supports the extension, and in OpenSSL's case, that means 1.0.2. Fortunately, gentoo makes this easy. In general, this entire process is not going to be that difficult if you're already running Apache 2.4 on gentoo - but if you're not... it's probably going to be pretty annoying.

Let's Configure Apache

Okay, now that we're using OpenSSL 1.0.2, we need to configure our webserver to send the TLS extension. This is really where the bleeding edge starts to happen. I'm not aware of any way to do this for nginx or really for anything but Apache. And for Apache, the code isn't even in a stable release, it's in trunk. (And it's not that well tested.) But it does exist, thanks to the efforts of Jeff Trawick.

So if you're willing to compile Apache yourself, you can get it working. You don't have to compile trunk; you can instead patch 2.4 and then compile the module specifically. I discovered that it's pretty easy to add patches to a package automatically in gentoo thanks to /etc/portage/patches, so that's even better! (For me anyway.)

I have the two patches you will need for Apache 2.4 here and here. These patches (and this process) are based on Jeff's work, but be aware his repo's code is out of date compared with httpd's trunk, and his patch didn't work for me out of the box. Jeff updated his patch, and it works out-of-the-box (for me). You can find it here.

As of Apache 2.4.20, you no longer need to patch Apache! Thanks Jeff! For more information check out his github repo.

You do need to compile the Apache module. For that, you need to check out httpd's trunk. After that, building the module is quite simple - there's a sample command right after the checkout instructions. For me it was:

cd modules/ssl
apxs -ci -I/usr/include/openssl mod_ssl_ct.c ssl_ct_util.c ssl_ct_sct.c ssl_ct_log_config.c

This even goes so far as to install the module for you! Let's go configure it. I wanted to be able to control it with a startup flag, so where the modules were loaded I specified:

<IfDefine SSL_CT>
LoadModule ssl_ct_module modules/mod_ssl_ct.so
</IfDefine>

Now if you read the module documentation, you discover this module does a lot. It uses the command-line tools from the certificate-transparency project to automatically submit your certificates, it handles proxies, it does auditing... It's complicated. Instead, we want the simplest thing that works - so we're going to ignore all that functionality and just configure it statically, using SCTs we give it ourselves.

I have two VHosts running, which actually caused a bug - pay attention to that if you have multiple VHosts. This is fixed! I configured it like this:

<IfDefine SSL_CT>
CTSCTStorage   /run/ct-scts

CTStaticSCTs /etc/apache2/ssl/rittervg-leaf.cer /etc/apache2/ssl/rittervg-scts
CTStaticSCTs /etc/apache2/ssl/cryptois-leaf.cer /etc/apache2/ssl/cryptois-scts
</IfDefine>

The CTSCTStorage directive is required; it's a working directory where the module stores some temporary files. The CTStaticSCTs directive tells it to look in the given directory for files ending in .sct containing SCTs for that certificate.

So we need to put our SCTs in that directory - but in what format? It's not really documented, but it's the exact SCT structure that goes into the TLS extension. You'd think that structure would be documented in the RFC - in fact you'd probably expect it to be right here, where they say SerializedSCT - but no dice, that's not defined. Instead you can find it here, though it's not readily apparent. I figured it out mostly by reading Chromium's source code.

To aid you in producing these .sct files, I again wrote a little script. You give it the log name, the timestamp, and the signature you got from the prior script, and it outputs a small binary file. (You can write it to a file, or output it in base64 for easy copy/pasting between terminals.)
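
The serialized structure itself is simple enough to sketch. This mirrors what such a script produces, under the assumption - which matches the submission output above, whose signatures decode to a leading 0x04 0x03 algorithm header - that the log's signature blob already includes the 4-byte digitally-signed header:

```python
import base64
import struct

def serialize_sct(log_id: bytes, timestamp_ms: int, signature_b64: str) -> bytes:
    """Pack an RFC 6962 SignedCertificateTimestamp structure.

    The signature blob returned by the submission API already includes the
    digitally-signed header (hash alg, sig alg, length), so it is appended
    verbatim. log_id is the SHA-256 hash of the log's public key.
    """
    assert len(log_id) == 32
    sct = b"\x00"                           # sct_version: v1
    sct += log_id                           # id: 32-byte LogID
    sct += struct.pack(">Q", timestamp_ms)  # timestamp: uint64, big-endian
    sct += b"\x00\x00"                      # extensions: zero-length
    sct += base64.b64decode(signature_b64)  # digitally-signed struct
    return sct
```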

With these .sct files present, you can fire up Apache (passing -D SSL_CT if necessary, perhaps via /etc/conf.d/apache) and see if it starts up! If it does, visit your site in Chrome (which will send the CT TLS extension) and see if the site loads. If it does, look in the Origin Information Bubble and see if you have transparency records:

Hopefully you do, and you can click in to see it:

"From an unknown log" appears because Rocketeer and Certly are still pending approval and inclusion

It's a known issue that you don't get the link or SCT viewer on Mac, but it will say "and is publicly auditable".

Requiring Certificate Transparency

That's all well and good - and took way more work than you expected - but this only sends CT information; it doesn't require it. For that we need to go edit and compile another giant project: Chromium.

Wait seriously? You're going to go patch Chromium?

Hell yea. Like I said in the beginning: requiring CT, today, is a proof of concept of something from the future. We may get to the day where Chrome requires CT for all certificates - both EV and DV - but not yet, and not for several years at least. Today, we need to patch it. But rather than patching it to require CT for every domain on the internet - that would break, AFAIK, every single domain except my two and DigiCert's test site - we're making it a directive in HSTS that a site operator can specify.

Building Chromium is not trivial - unless you're on gentoo, in which case it's literally how you install the browser. I worked off Chromium 42.0.2292.0, which has already become out of date in the 10 days since I started this project. But whatever. I used the manual ebuild commands to pause between the unpack and compile stages to test my edits - unless you use that exact version, you'll almost certainly not get a clean apply of my patch.

The patch to Chromium is over here, and it adds support for the HSTS directive requireCT. If present, the browser will fail a connection to any site it has noted this for, unless the site supplies SCTs in the TLS extension, the certificate extension, or an OCSP staple. (The last two are untested, but probably work.)

Here's what it looks like when it fails:

And that, I think, is kinda cool.

Code Execution In Spite of BitLocker
8 Dec 2014 09:02:23 EST

Disk Encryption is “a litany of difficult tradeoffs and messy compromises” as our good friend and mentor Tom Ptacek put it in his blog post. That sounds depressing, but it’s pretty accurate - trying to encrypt an entire hard drive is riddled with constraints. For example:

The last two constraints mean that the ciphertext must be the exact same size as the plaintext. There’s simply no room to store IVs, nonces, counters, or authentication tags. And without any of those things, there’s no way to provide cryptographic authentication in any of the common ways we know how to provide it. No HMACs over the sector and no room for a GCM tag (or OCB, CCM, or EAX, all of which expand the message). Which brings us to…

Poor-Man’s Authentication

Because of the constraints imposed by the disk format, it’s extremely difficult to find a way to correctly authenticate the ciphertext. Instead, disk encryption relies on ‘poor-man’s authentication’.

The best solution is to use poor-man’s authentication: encrypt the data and trust to the fact that changes in the ciphertext do not translate to semantically sensible changes to the plaintext. For example, an attacker can change the ciphertext of an executable, but if the new plaintext is effectively random we can hope that there is a far higher chance that the changes will crash the machine or application rather than doing something the attacker wants.

We are not alone in reaching the conclusion that poor-man’s authentication is the only practical solution to the authentication problem. All other disk-level encryption schemes that we are aware of either provide no authentication at all, or use poor-man’s authentication. To get the best possible poor-man’s authentication we want the BitLocker encryption algorithm to behave like a block cipher with a block size of 512–8192 bytes. This way, if the attacker changes any part of the ciphertext, all of the plaintext for that sector is modified in a random way.

That excerpt comes from an excellent paper by Niels Ferguson of Microsoft in 2006 explaining how BitLocker works. The property of a single-bit change propagating to many more bits is called diffusion, and it's actually a design goal of block ciphers in general. When talking about disk encryption in this post, we're going to use diffusion to refer to how much changing a single bit (or byte) on an encrypted disk affects the resulting plaintext.

BitLocker in Windows Vista & 7

When BitLocker was first introduced, it operated in AES-CBC with something called the Elephant Diffuser. The BitLocker paper is an excellent reference both on how Elephant works, and why they created it. At its heart, the goal of Elephant is to provide as much diffusion as possible, while still being highly performant.

The paper also includes Microsoft’s Opinion of AES-CBC Mode used by itself. I’m going to just quote:

Any time you want to encrypt data, AES-CBC is a leading candidate. In this case it is not suitable, due to the lack of diffusion in the CBC decryption operation. If the attacker introduces a change d in ciphertext block i, then plaintext block i is randomized, but plaintext block i + 1 is changed by d. In other words, the attacker can flip arbitrary bits in one block at the cost of randomizing the previous block. This can be used to attack executables. You can change the instructions at the start of a function at the cost of damaging whatever data is stored just before the function. With thousands of functions in the code, it should be relatively easy to mount an attack.

The current version of BitLocker [Ed: BitLocker in Vista and Windows 7] implements an option that allows customers to use AES-CBC for the disk encryption. This option is aimed at those few customers that have formal requirements to only use government-approved encryption algorithms. Given the weakness of the poor-man’s authentication in this solution, we do not recommend using it.

BitLocker in Windows 8 & 8.1

BitLocker in Windows 8 and 8.1 uses AES-CBC mode, without the diffuser, by default. It’s actually not even a choice, the option is entirely gone from the Group Policy Editor. (There is a second setting that applies to only “Windows Server 2008, Windows 7, and Windows Vista” that lets you choose Diffuser.) Even using the commandline there’s no way to encrypt a new disk using Diffuser - Manage-BDE says “The encryption methods aes128_Diffuser and aes256_Diffuser are deprecated. Valid volume encryption methods: aes128 and aes256.” However, we can confirm that the code to use Diffuser is still present - disks encrypted under Windows 7 with Diffuser continue to work fine on Windows 8.1.

AES-CBC is the exact mode that Microsoft deemed "not suitable" in 2006 and did "not recommend using". They explicitly said "it should be relatively easy to mount an attack".

And it is.

As written in the Microsoft paper, the problem comes from the fact that an attacker can modify the ciphertext and perform very fine-grained modification of the resulting plaintext. Flipping a single bit in the ciphertext reliably scrambles the corresponding plaintext block in an unpredictable way (the rainbow block), and flips the exact same bit in the following plaintext block (the red line):

CBC Mode Bit Flipping Propagation
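
The diagram's property can be demonstrated end-to-end with a sketch. The toy 4-round Feistel cipher below is a stand-in for AES (purely illustrative, not secure) - the CBC malleability shown holds for any block cipher:

```python
import hashlib
import os

BLOCK = 16

def _f(key: bytes, rnd: int, half: bytes) -> bytes:
    # Round function: any keyed function works; SHA-256 keeps this stdlib-only.
    return hashlib.sha256(key + bytes([rnd]) + half).digest()[:8]

def block_encrypt(key: bytes, block: bytes) -> bytes:
    # Toy 4-round Feistel cipher standing in for AES - illustrative only.
    left, right = block[:8], block[8:]
    for rnd in range(4):
        left, right = right, bytes(a ^ b for a, b in zip(left, _f(key, rnd, right)))
    return left + right

def block_decrypt(key: bytes, block: bytes) -> bytes:
    left, right = block[:8], block[8:]
    for rnd in reversed(range(4)):
        left, right = bytes(a ^ b for a, b in zip(right, _f(key, rnd, left))), left
    return left + right

def cbc_encrypt(key: bytes, iv: bytes, pt: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(pt), BLOCK):
        prev = block_encrypt(key, bytes(a ^ b for a, b in zip(pt[i:i + BLOCK], prev)))
        out += prev
    return out

def cbc_decrypt(key: bytes, iv: bytes, ct: bytes) -> bytes:
    out, prev = b"", iv
    for i in range(0, len(ct), BLOCK):
        blk = ct[i:i + BLOCK]
        out += bytes(a ^ b for a, b in zip(block_decrypt(key, blk), prev))
        prev = blk
    return out

key, iv = os.urandom(16), os.urandom(16)
pt = b"\x00" * 48                          # three known-plaintext blocks
ct = bytearray(cbc_encrypt(key, iv, pt))

ct[3] ^= 0x80                              # flip one ciphertext bit in block 0
garbled = cbc_decrypt(key, iv, bytes(ct))

print(garbled[:16] != pt[:16])             # block 0: randomized
print(garbled[19] == pt[19] ^ 0x80)        # block 1: the same bit flipped
print(garbled[32:] == pt[32:])             # block 2: untouched
```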

This type of fine-grained control is exactly what Poor Man’s Authentication is designed to combat. We want any change in the ciphertext to result in entirely unpredictable changes in the plaintext and we want it to affect an extremely large swath of data. This level of fine-grained control allows us to perform targeted scrambling, but more usefully, targeted bitflips.

But what bits do we flip? If the disk is encrypted, don't we lack any idea of where anything interesting is stored? Yes and no. In our testing, two installations of Windows 8 onto the same type of machine put the system DLLs in identical locations. This behavior is far from guaranteed, but if we do know where a file is expected to be - perhaps through educated guesswork and installing the OS on the same physical hardware - then we know the location, the ciphertext, and the plaintext. And at that point, we can do more than just flip bits: we can completely rewrite what will be decrypted on startup. This lets us do much more than what people have suggested around changing a branch condition - we just write arbitrary assembly code. So we did. Below is a short video that shows booting a virtual machine with a normal, unmodified BitLockered Windows 8 disk; shutting it down and modifying the ciphertext on the underlying disk; starting it back up; and achieving arbitrary code execution.

Visit the Youtube Video

This is possible because we knew the location of a specific file on the disk (and therefore the plaintext), calculated the ciphertext necessary to produce our desired shellcode, and wrote it onto the disk. (The particular file we chose did move around during installation, so we did 'cheat' a little - with more time investment, we could change our target to a system DLL that hasn't been patched by Windows Update or moved since installation.) Upon decryption, 16 bytes were garbled, but we chose the position and assembly code carefully such that the garbled blocks were always skipped over. To give credit where others have demonstrated similar work: this is the same type of attack that Jakob Lell demonstrated against LUKS partitions last year.
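
The "calculate the ciphertext" step is just the CBC identity run in reverse. A sketch (the helper name is mine, not from any real tooling):

```python
def rewrite_block(c_prev: bytes, p_known: bytes, p_want: bytes) -> bytes:
    """In CBC, plaintext block i is Decrypt(C[i]) XOR C[i-1]. If the attacker
    knows P[i], replacing C[i-1] with C[i-1] XOR P[i] XOR P_want makes block i
    decrypt to exactly P_want - at the cost of scrambling block i-1."""
    return bytes(a ^ b ^ c for a, b, c in zip(c_prev, p_known, p_want))

# Sanity check of the identity, with a made-up "Decrypt(C[i])" value:
d_ci = bytes(range(16))                                # whatever the cipher outputs
p_known = bytes(16)                                    # known plaintext: all zeros
c_prev = bytes(a ^ b for a, b in zip(d_ci, p_known))   # so P[i] = p_known
p_want = b"shellcode here!!"                           # 16 bytes we want decrypted

new_prev = rewrite_block(c_prev, p_known, p_want)
decrypted = bytes(a ^ b for a, b in zip(d_ci, new_prev))
print(decrypted == p_want)  # True
```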

XTS Mode

The obvious question comes up when discussing disk encryption modes: why not use XTS, a mode specifically designed for disk encryption and standardized and blessed by NIST? XTS is used in LUKS and Truecrypt, and prevents targeted bitflipping attacks. But it’s not perfect. Let’s look at what happens when we flip a single bit in ciphertext encrypted using XTS:

XTS Mode Bit Flipping Propagation

A single-bit change in the ciphertext completely scrambles the corresponding 16-byte block of plaintext - there's no control over the change. That's good, right? It's not bad, but it's not as good as it could be. Unfortunately, XTS was not considered in the original Elephant paper (it was relatively new in 2006), so we don't have the authors' thoughts on it in direct comparison to Elephant. But the authors of Elephant evaluated another disk encryption mode, LRW, which has the same property:

LRW provides some level of poor-man’s authentication, but the relatively small block size of AES (16 bytes) still leaves a lot of freedom for an attacker. For example, there could be a configuration file (or registry entry) with a value that, when set to 0, creates a security hole in the OS. On disk the setting looks something like “enableSomeSecuritySetting=1”. If the start of the value falls on a 16-byte boundary and the attacker randomizes the plaintext value, there is a 2^-16 chance that the first two bytes of the plaintext will be 0x30 0x00 which is a string that encodes the ASCII value ’0’.

For BitLocker we want a block cipher whose block size is much larger.

Furthermore, they elaborate on this in their comments to NIST on XTS, explicitly calling out the small amount of diffusion. A 16-byte scramble is pretty small - only 3-4 assembly instructions. To see how XTS's diffusion compares to Elephant's, we modified a single bit on the disk of a BitLockered Windows 7 installation that corresponded to a file of all zeros. The resulting output shows that 512 bytes (the smallest sector size in use) were modified:

Elephant Bit Flipping Propagation

This amount of diffusion is obviously much larger than 16 bytes. It's also not perfect - a 512-byte scramble, in the right location, could very well result in a security bypass. Remember, this is all 'Poor Man's Authentication' - we know the solution is not particularly strong; we're just trying to get the best we can. But it's still a lot harder to pop calc with.


From talking with Microsoft about this issue, one of the driving factors in this change was performance. Indeed, when BitLocker first came out and was documented, the paper spent considerable effort evaluating algorithms on cycles/byte. Back then there were no AES instructions built into processors - today there are, and that has likely shifted the bulk of BitLocker's workload onto the Diffuser. And while computers have become more powerful since 2006, tablets, phones, and embedded devices are no longer the 'exception' but a major target market.

Using Full Disk Encryption (including BitLocker in Windows 8) is clearly better than not - as anyone who's had a laptop stolen from a rental car knows. Ultimately, I'm extremely curious what requirements the new BitLocker design had placed on it. Disk encryption is hard, and even XTS (standardized by NIST) has significant drawbacks. With more information about real-world design constraints, the cryptographic community can focus on developing something better than Elephant or XTS.

I’d like to thank Kurtis Miller for his help with Windows shellcode, Justin Troutman for finding relevant information, Jakob Lell for beating me to it by a year, DaveG for being DaveG, and the MSRC.

This post originally appeared on the Cryptography Services blog.

Run Your Own Tor Network
17 Nov 2014 13:00:23 EST

Tor is interesting for a lot of reasons. One of them is that the network itself operates, at its core, on mutually distrusting Directory Authorities. These Directory Authorities are run by members of the Tor Project and by trusted outside individuals/groups, such as RiseUp.net and CCC.de. A Directory Authority votes on its view of the network and collects the votes of the other Directory Authorities. If a majority of authorities vote for something (the inclusion of a relay, marking it as 'Bad', whatever), the vote passes.

This infrastructure design is very interesting. The only thing that comes close, that I can think of, is the Bitcoin blockchain or Ripple's ledgers. Compare it to some of the other models:

I think the Directory Authority model is pretty elegant. Relying on the user to make trust decisions doesn't work out so well. A single trusted server, or set of servers administered by one organization, is at risk of complete compromise in one fell swoop. But separately managed servers that operate on a majority vote mitigate many concerns.

If one were to take it a step further, one would ensure that no majority of the servers runs the same software stack, to reduce the possibility of a single bug affecting a majority. Tor is a poor example here, because it relies on OpenSSL, which is not easily swapped out - a majority of DirAuths had to upgrade when Heartbleed hit. Going even further - there is only one implementation of the DirAuth voting protocol, in the tor daemon itself. Certificate Transparency, by comparison, has at least two independent implementations.

But, to be clear - locking a user into a trust decision, even a consensus of mutually distrusting authorities, is still a bad thing. If tor only allowed you to use the official Tor Network - that would be bad. We should be able to change who we trust at any time - Moxie dubs it Trust Agility. It's worth noting that the Tor Network has some amount of trust agility, but it's not perfect. If I want to change the Directory Authorities that I trust I can technically do so, but I will no longer be able to use the official Tor Network because those few thousand relays 'belong' to it, and one cannot set up a network that includes them. (There's been some thoughts that one might be able to, but it would be an unsupported hack, liable to break.) It would be interesting if the codebase could evolve such that a tor node may belong to more than one network at a time. Then an alternate network could flourish, and relay operators could join multiple networks to support other administrative boundaries.

Can I run a tor network?

Tor is open source. There aren't a lot of instructions for actually deploying the Directory Authorities, but what is there is not bad. And you can absolutely run your own tor network. There are actually three different ways to do it. Chutney and Shadow are tools designed mostly for setting up test networks and running experiments under laboratory conditions. Shadow is specifically designed for running bandwidth tests across large tor networks - so if you want to model a tor network running 50,000 nodes, Shadow's your huckleberry.

But if you want to deploy as authentic a tor network as possible, do it manually. It's not all that hard. And if you want to conduct research on tor's protocols, it's a great way to do it safely, instead of actively de-anonymizing real users in the wild. Here are the approximate steps:

Configure and compile tor, as normal, on all your boxes.
If you're going to run multiple daemons per machine, you may want to use ./configure --prefix=/directory/tor-instance-1 to segment them.

Start configuring a few Directory Authorities.
This step generates the keys for them and the DirServer lines. Run tor-gencert to generate an identity key, then run tor --list-fingerprint. Create your DirServer lines like DirServer orport=<port> v3ident=<fingerprint from authority_certificate, no spaces> <ip>:<port> <fingerprint from --list-fingerprint in ABCD EF01 format>. These DirServer lines are what put you onto an alternate tor network instead of the official one. You need one line per Directory Authority, and all DirServer lines need to be in the configuration of every DirAuth, node, and client you want to talk to this network.

Finish the Directory Authorities' configuration.
You should set SOCKSPort to 0, ORPort to something, and DirPort to something.

You need to set AuthoritativeDirectory and V3AuthoritativeDirectory. You can also set VersioningAuthoritativeDirectory along with RecommendedClientVersions and RecommendedServerVersions - why not. Perhaps you also want to copy ConsensusParams out of a recent consensus. If you're going to run multiple tor daemons off a single IP address, you should set AuthDirMaxServersPerAddr 0 (0 is unlimited; the default is two servers per IP).

You will also (probably) want to lower the voting times so you can generate a consensus quicker. To start off with, I'd suggest V3AuthVotingInterval 5 minutes, V3AuthVoteDelay 30 seconds, and V3AuthDistDelay 30 seconds. You can also set MinUptimeHidServDirectoryV2 to something like 1 hour.
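
Put together, a minimal torrc for one Directory Authority might look like the following. The nickname, addresses, and fingerprint placeholders are illustrative - substitute your own values:

```
# Directory Authority torrc for a private tor network - illustrative values
AuthoritativeDirectory 1
V3AuthoritativeDirectory 1
SOCKSPort 0
ORPort 5001
DirPort 7001
AuthDirMaxServersPerAddr 0
V3AuthVotingInterval 5 minutes
V3AuthVoteDelay 30 seconds
V3AuthDistDelay 30 seconds
# One DirServer line per authority, identical across every daemon:
DirServer auth1 orport=5001 v3ident=<v3 fingerprint> 10.0.0.1:7001 <relay fingerprint>
DirServer auth2 orport=5001 v3ident=<v3 fingerprint> 10.0.0.2:7001 <relay fingerprint>
```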

Start up your Directory Authorities.
They should all be running, and you should see stuff like 'Time to vote' and 'Uploaded a vote to...' in the notices.log

You will also see "Nobody has voted on the Running flag. Generating and publishing a consensus without Running nodes would make many clients stop working. Not generating a consensus!" This is normal. If TestingAuthDirTimeToLearnReachability is not set (and it's not), a Directory Authority will wait 30 minutes before voting to consider a relay Running. You should either be patient and wait the 30 minutes, or set AssumeReachable to skip the wait. They will shortly begin generating a consensus, which you can see at http://<ip>:<port>/tor/status-vote/current/consensus

Start adding more nodes.
Configure some Exit and Relay nodes (and optionally Bridges). Each node's configuration will need the DirServer lines. If you're running your nodes in the same /16, you will also need to set EnforceDistinctSubnets 0.

There is one other thing you will need to set for the first few nodes, though: AssumeReachable 1. If the consensus has no Exit nodes, a subtle bug manifests: nodes get in a loop and will not upload their descriptors to the Directory Authorities for inclusion in the consensus. By setting AssumeReachable, we skip the reachability test. (The other option is to set up one of your Directory Authorities as an Exit node.)

Run Depictor.
Depictor is a service that monitors the Directory Authorities and generates a pretty website that will give you a lot of info about your network. (Full disclosure: I wrote depictor, porting an older Java-based tool called 'Doctor' to Python.)

At this point, you can add those DirServer lines to some clients and start sending traffic through your network. The only hard thing left is soliciting hundreds to thousands of relay operators to see the value in splitting from the official network to join yours. =)
