Planet Debian Administration

16th July 2014

Steve: Rejoining the Debian project

I resigned from the Debian project a few years ago, but I'm currently going through the process to un-retire.

Fun times.

(I have more free time now that I'm happily married; my wife works as a doctor in A&E, which means a few late-night shifts in local hospitals.)

16th July 2014 10:37:18 : No comments. Link

27th June 2014

Steve: Git-based DNS hosting


So I accidentally released a service which will give you resilient, low-latency DNS hosting (which uses Amazon's Route53 on the backend).

If you'd like to use Git to store your DNS-data, and add/update DNS-entries via a simple "git push" then you should check it out:

27th June 2014 16:44:06 : No comments. Link

10th June 2014

lee: Using Amazon SES with Exim (Submission)


A previous entry from 2011 on the subject of using SES with Exim still comes up in web searches. As with the 2011 config, this assumes a standard Debian/Ubuntu exim4 deployment using a split-file config.

SES now allows authenticated relay via port 25, but also submission (port 587). Logically it makes more sense to think of SES as a submission host, since it uses ESMTPSA and makes modifications to the mail in transit, so I explicitly create a submission transport at /etc/exim4/conf.d/transport/40_local_submission

remote_submission:
  debug_print = "T: remote_submission for $local_part@$domain"
  driver = smtp
  port = 587
  hosts_require_auth = *
  hosts_require_tls = *
Then I add in a router which activates based on the sender at /etc/exim4/conf.d/router/180_local_aws-ses
aws_ses:
  debug_print = "R: send_via_ses for mail from $sender_address"
  driver = manualroute
  host_find_failed = freeze
  domains = ! +local_domains
  senders = AWS_SES_SENDER
  transport = remote_submission
  route_list = * AWS_SES_SERVER
The site-specific configuration goes in /etc/exim4/conf.d/main/00_local_aws-ses
## sender email addresses to be routed via SES, one-per-line
AWS_SES_SENDER = lsearch*@;/etc/exim4/ses_senders

## nearest SES ingress point
AWS_SES_SERVER = email-smtp.eu-west-1.amazonaws.com
And assuming the standard auth handling is in place, add a line to /etc/exim4/passwd.client
*.amazonaws.com:Yourusername:Yourpassword
Note: these are SMTP-specific credentials, not the AWS credentials previously used. Run update-exim4.conf and then restart Exim.
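
To confirm the routing before sending real mail, you can use exim's address-testing mode (a sketch: alerts@example.com stands in for an address listed in /etc/exim4/ses_senders, and the recipient is arbitrary):

# should report router = aws_ses, transport = remote_submission
exim4 -bt -f alerts@example.com someone@gmail.com
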
10th June 2014 21:35:33 : No comments. Link

3rd June 2014

Steve: Hire Steve ...

If you're looking for a system administrator, who is very very familiar with Debian GNU/Linux, please do consider getting in touch.

I'm an ex-member of the Debian project, and was a member of the Debian Security Team, which involved handling security updates for the distribution. (This came about as a result of my interest in auditing software, a process which led to the discovery, and fixing, of numerous flaws in popular open-source applications and servers.)

I'm based in Edinburgh, but I have many years of experience working remotely, and would be happy to do so again.

As you can see from my prior submissions I'm very familiar with system administration, including (but not limited to):

3rd June 2014 12:46:28 : No comments. Link

15th May 2014

Steve: Promoting my own creations ..


In the past I've written and posted articles upon this site about my own software.

Over time I've become a little less keen on doing so, on the basis that people might think it too Steve-centric. That said, I always created this site to document things of interest to myself, and the fact that others enjoy them is a nice bonus.

In summary: I wrote a mail client. It is console-based, like mutt, but it has a real scripting language (Lua) you can use to do fun things.

You can find details here, should you have any interest:

Relatedly I've been pondering a "Hire Steve" banner on the top of the site. I've resisted the temptation for the moment, but give it a couple more weeks of unemployment and I may well change my mind..

15th May 2014 13:31:32 : 2 comments.

12th May 2014

simonw: Web Application Vulnerability scanners

I've been trying out various web application vulnerability scanners, both Open Source and proprietary.

These are tools that will analyse your website, or in some cases an instrumented copy of your site, and identify some types of common security flaws, or in other cases simple omissions of best practice.

My main goal is to find tools which are easy to integrate into a Continuous Integration process, so ideally I'm looking for scanners with minimal user interaction and a command-line-driven batch mode, to beef up the current CI process.

I've not reached any conclusions yet, but it is an interesting landscape. Almost everyone agrees more tools are better: they are all slightly different, and there are inevitably bugs that one finds and another misses. CPU time is a LOT cheaper than developer or pen-tester time, so a lot of automation makes sense; however, the significant false-positive rate means that more tools do make some more work. This scales better than I expected, because the same issues trip up different tools in the same way.

For example, the WordPress comment handler "wp-comments-post.php" returns a "500 Server Error" HTTP response code under some circumstances, and lots of tools leap on this, proudly noting they have "crashed" your server, when the response itself shows otherwise.

But alas WordPress defaults to 500 status codes when people omit required fields on forms, and other common cases. This WordPress annoyance has been lurking for at least 4 years, and the Lead Developers seem in no hurry to stop it emitting error codes suggesting it has crashed just because someone forgot to enter their email address.

https://core.trac.wordpress.org/ticket/10551

Almost all the tools pick up on this, and arguably rightly so, but it isn't a useful find.


Intercepting Proxies

Intercepting Proxies sit between the browser and the website under test, and thus can easily identify (and modify if needed) any traffic from browser to server. For AJAX, web sockets, and a number of other web technologies this is an essential place to be for spotting certain common vulnerabilities (like failure to validate AJAX requests server-side).

One of our tools of choice is BurpSuite, which is a fantastic little application, although you do need to buy the Pro version to do proper automated scanning. Whilst we'll use it for manual and pre-release testing, its focus is on interactive use, and whilst some folk have tried to drive it from the command line, I suspect down this route lies pain and eternal maintenance.

The OWASP Zed Attack Proxy (ZAP) is targeted at almost exactly the same space as BurpSuite (a clone?). In true Open Source style it is harder to use and a little rougher around the edges, but it allegedly does more, seems to have a little more momentum and adaptability, and has a very responsive lead funded by Mozilla. So far it takes longer to run and produces more false positives, but in my initial tests it identified a whole host of minor but legitimate issues. Although I rushed it, it still took nearly 4 hours for a small website (CPU time may be cheap, but we do want answers before the release date). I suspect many folk who've never used BurpSuite Pro will think ZAP fantastic, and it does look like a little attention to settings will pay dividends in both run time and false-positive rate.

I suspect either tool can be used in Continuous Integration as a proxy, but I see more effort to support this from the OWASP ZAP lead.
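
If you do go the proxy-in-CI route, the usual shape is to start the proxy headless, run the browser tests through it, and harvest the alerts afterwards. A minimal sketch (assuming ZAP's daemon mode; the port is arbitrary):

# start ZAP with no GUI, listening as an intercepting proxy on localhost:8090
zap.sh -daemon -port 8090
# ...then run the test suite with http_proxy=http://localhost:8090/ set,
# and query ZAP for the accumulated alerts once the tests finish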


Vulnerability Scanners

In a previous role, where I was working on PHP and Perl websites predominantly doing simple forms, the tool of choice was Wapiti. My reading around suggests it is still an excellent choice, scoring well in comparisons and being quick and easy to use. The version of Wapiti in Debian is 1 until Jessie, so you probably want either a dedicated network-security distro like Kali on a machine somewhere, or just Debian testing.
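
That batch mode is exactly what CI wants: the whole scan is one command. A minimal sketch of such a step (flag names differ between Wapiti versions; this assumes a 2.x-style invocation and an illustrative staging URL):

# crawl and scan the staging site, keeping a plain-text report as a build artifact
wapiti http://staging.example.com/ -f txt -o wapiti-report.txt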

I note also that the OSSIM vulnerability scanner (OpenVAS) will run Wapiti if it is installed, but again, because OSSIM is based on Squeeze, it is Wapiti version 1 you get via "apt-get". Still, if you need a network security scanner, installing OSSIM is a lot easier than installing OpenVAS on Debian, and sticking in Wapiti is a no-brainer as a one-line enhancement. OSSIM can run Nikto with a really minor tweak I've discussed on their forum as well.

Nikto still finds a place in my arsenal. Somehow I can't love w3af. Few of the other tools make me love them (except NMAP of course).

Next on my list are some of the more established proprietary tools, but there are a huge number of Open Source tools I simply won't have time to evaluate, so any tips on where to look first are appreciated. I don't see an alternative to the intercepting proxy, or something logically equivalent; there is also a requirement to handle newer browser features like Web Sockets and local storage (which I see as a greatly enriched version of cookie poisoning).

As always these tools won't keep you safe, but they may let you know when you are heading in the wrong direction, which is why using them as early in the development process as possible makes sense.
12th May 2014 21:32:39 : No comments. Link

7th May 2014

ajt: Strange Dovecot behaviour solved

I recently had an incident where Dovecot didn't start properly, and then the following day it did. Yesterday I had the same scenario, which was really annoying: the core process starts but none of the actual worker children do.

The problem turned out to be an NFS mount to a box that had gone away, which meant that when Dovecot started and went looking for the users' mbox files in their home directories (not themselves NFS mounted) it got stuck. I eventually spotted this by starting Dovecot under strace, where the problem was instantly obvious, and a quick forced umount of the now-absent filesystem allowed Dovecot to carry on.
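
A minimal sketch of that debugging session (the paths are illustrative; dovecot's -F flag keeps it in the foreground so strace can follow it):

# run dovecot in the foreground under strace, following forked children
strace -f -o /tmp/dovecot.trace dovecot -F
# a worker blocked in open()/stat() on the dead mount point gives the game away;
# then forcibly (and lazily) detach the stale NFS mount
umount -f -l /mnt/dead-nfs-server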

7th May 2014 22:08:43 : 1 comment.

14th April 2014

dkg: OTR key replacement (heartbleed)

I'm replacing my OTR key for XMPP because of heartbleed (see below).

If the plain ASCII text below is mangled beyond verification, you can retrieve a copy of it from my web site that should be able to be verified.
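
A minimal sketch of the verification step, assuming the signed block below is saved to a local file (the filename is illustrative):

gpg --verify dkg-otr-transition.txt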

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

OTR Key Replacement for XMPP dkg@jabber.org
===========================================
Date: 2014-04-14

My main XMPP account is dkg@jabber.org.

I prefer OTR [0] conversations when using XMPP for private
discussions.

I was using irssi to connect to XMPP servers, and irssi relies on
OpenSSL for the TLS connections.  I was using it with versions of
OpenSSL that were vulnerable to the "Heartbleed" attack [1].  It's
possible that my OTR long-term secret key was leaked via this attack.

As a result, I'm changing my OTR key for this account.

The new, correct OTR fingerprint for the XMPP account at dkg@jabber.org is:

  F8953C5D 48ABABA2 F48EE99C D6550A78 A91EF63D

Thanks for taking the time to verify your peers' fingerprints.  Secure
communication is important not only to protect yourself, but also to
protect your friends, their friends and so on.

Happy Hacking,

  --dkg  (Daniel Kahn Gillmor)

Notes:

[0] OTR: https://otr.cypherpunks.ca/
[1] Heartbleed: http://heartbleed.com/
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1

iQJ8BAEBCgBmBQJTTBF+XxSAAAAAAC4AKGlzc3Vlci1mcHJAbm90YXRpb25zLm9w
ZW5wZ3AuZmlmdGhob3JzZW1hbi5uZXRFQjk2OTEyODdBN0FEREUzNzU3RDkxMUVB
NTI0MDFCMTFCRkRGQTVDAAoJEKUkAbEb/fpcYwkQAKLzEnTV1lrK6YrhdvRnuYnh
Bh9Ad2ZY44RQmN+STMEnCJ4OWbn5qx/NrziNVUZN6JddrEvYUOxME6K0mGHdY2KR
yjLYudsBuSMZQ+5crZkE8rjBL8vDj8Dbn3mHyT8bAbB9cmASESeQMu96vni15ePd
2sB7iBofee9YAoiewI+xRvjo2aRX8nbFSykoIusgnYG2qwo2qPaBVOjmoBPB5YRI
PkN0/hAh11Ky0qQ/GUROytp/BMJXZx2rea2xHs0mplZLqJrX400u1Bawllgz3gfV
qQKKNc3st6iHf3F6p6Z0db9NRq+AJ24fTJNcQ+t07vMZHCWM+hTelofvDyBhqG/r
l8e4gdSh/zWTR/7TR3ZYLCiZzU0uYNd0rE3CcxDbnGTUS1ZxooykWBNIPJMl1DUE
zzcrQleLS5tna1b9la3rJWtFIATyO4dvUXXa9wU3c3+Wr60cSXbsK5OCct2KmiWY
fJme0bpM5m1j7B8QwLzKqy/+YgOOJ05QDVbBZwJn1B7rvUYmb968yLQUqO5Q87L4
GvPB1yY+2bLLF2oFMJJzFmhKuAflslRXyKcAhTmtKZY+hUpxoWuVa1qLU3bQCUSE
MlC4Hv6vaq14BEYLeopoSb7THsIcUdRjho+WEKPkryj6aVZM5WnIGIS/4QtYvWpk
3UsXFdVZGfE9rfCOLf0F
=BGa1
-----END PGP SIGNATURE-----
14th April 2014 18:43:20 : No comments. Link

24th February 2014

dkg: Inline-PGP considered harmful

We changed the default PGP signatures generated by enigmail in debian from Inline PGP to PGP/MIME last year, and the experiment has gone well enough that we're now using it in jessie and wheezy (where it arrived as part of a security update to make the extension work with the security-updated icedove package).

After having several people poke me in different contexts about why inline cleartext PGP signatures are a bad idea, i got sufficiently tired of repeating myself, and finally documented some of the problems explicitly.

The report includes a demonstration of a content-tampering attack that changes the meaning of a signed inline-PGP message without breaking the signature, which i first worked out on the notmuch mailing list, but hadn't gotten around to demonstrating until recently.

The attack is demonstrated against clearsigned messages, but also works against inline encrypted messages (but is harder to demonstrate since a demonstration would require sharing secret key material for the decryption step).

Please don't generate Inline-PGP messages. And if you must parse and accept them, please consider carefully the risks you expose your users to and think about ways to mitigate the problems.

24th February 2014 02:09:40 : 4 comments.

19th February 2014

Steve: Easily sharing markdown text

I've just about completed a new toy project:

This is basically a markdown-using pastebin site.

You can paste in your random markdown text, and once you've done so you receive a link you can share with your boss, your friends, or whoever.

The source code is available on Github, and there is a (trusted) docker image too, which makes installation trivial.

Feedback welcome, either here, via mail, or via the github tracker.

I'm pretty pleased with the project, and it is already proving useful.

19th February 2014 09:41:32 : No comments. Link

31st January 2014

Steve: Server Optimization ..


Contributions and feedback welcome, on my new site:

Much like this one in nature, but much more succinct and focussed.

31st January 2014 19:46:03 : No comments. Link

19th January 2014

Steve: Starting a new job ..


On Monday (tomorrow) I'll be starting a new job, once again working from home - something I was keen to avoid.

Still, it won't be so bad; the only potential concern is that I'll be starting on my home desktop, and at some point during the day I'm expecting a delivery of a new Macbook which I'll need to switch over to using.

I've not really used a Mac since System 7, so that'll be "fun".

19th January 2014 11:52:30 : 2 comments.

26th December 2013

fugit: Problem with Bonding and Vlan on Wheezy

The Problem: Using the same bonding and VLAN configuration with OpenVZ that worked under Squeeze fails on Wheezy. The symptom is that only the vlan with the default gateway set works; I can move the default gateway to any vlan and that vlan will work. The vlans do work when communicating with machines on the same vlan. Using tcpdump/wireshark I confirmed that traffic is coming in but never making it out the default GW, unless it is the vlan with the default gateway. On the Squeeze servers you can see the traffic going out the default GW.

The Solution:
Turns out you need to set net.ipv4.conf.default.rp_filter = 2 (or 0 for no spoof protection); strict reverse-path filtering breaks vlans that are not on the default gateway. Unfortunately I didn't find the links with the solution until I had worked out that the issue was net.ipv4.conf.default.rp_filter. I originally missed this in testing because you need to restart networking after making the change; I am not sure how I missed it when rebuilding a new clean server with Wheezy, since when built from scratch with defaults rp_filter = 0. Like most problems it seems pretty obvious once you have the solution. The text in the sysctl.conf file says "Uncomment the next two lines to enable Spoof protection (reverse-path filter)", which pretty clearly was the issue. Sadly I tested twice to make sure changes I had made were not causing the problem, but the first test failed because I had not restarted the network or the server after reverting the changes to rp_filter, and the second time I have no idea how I missed it on a clean build of a new server: after building the server and only changing the network config it presented the same symptoms, so obviously I made a change or missed something. Hopefully this post will save someone else some time.
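
A minimal sketch of the fix (the comments are mine; setting conf.all as well is a common belt-and-braces step rather than something the original diagnosis strictly requires):

## in /etc/sysctl.conf: loose reverse-path filtering, so replies may
## leave via a vlan other than the one holding the default gateway
net.ipv4.conf.default.rp_filter = 2
net.ipv4.conf.all.rp_filter = 2

# load the new values, then restart networking so they actually take effect
sysctl -p
service networking restart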

Cisco Setup:
Cisco Hardware
We are using Cisco Nexus 7000 switches with a gigabit Ethernet module that supports 802.3ad. For more information regarding the different bonding options you can check out this link

Setup the port channel
                                                                                                                                                                                                            
interface port-channel170
  description servername01
  switchport mode trunk
  switchport trunk allowed vlan 45,48-49
  vpc 170
Configure the physical interfaces on the cisco switch:
                                                                                                                                                                                                                                       
interface Ethernet1/11
  description servername#1
  switchport mode trunk
  switchport trunk allowed vlan 45,48-49
  spanning-tree port type edge
  channel-group 170 mode active
  no shutdown
                                                                                                                                                                                                                                            
interface Ethernet3/11
  description servername#2
  switchport mode trunk
  switchport trunk allowed vlan 45,48-49
  spanning-tree port type edge
  channel-group 170 mode active
  no shutdown
...
Make sure the "switchport trunk allowed vlan" list includes the vlans you are going to be using on the Linux server. Until these matched, nothing worked for me.

Server Hardware: The current server we are using is a DL360p G8, which has a Broadcom tg3 4-port card. This card has had several reported issues; to rule it out, I later installed a base Wheezy system on an older server that was known to work with our configuration under Squeeze and our current Nexus 7000 switch. This produced the same issues reported here. I had also tried using the backports kernel to further rule out drivers, before building the new server.
lspci | grep -i broad
03:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5719 Gigabit Ethernet PCIe (rev 01)
...
Linux Network Config:
Install the required packages and load the bonding module:
apt-get install vlan ifenslave
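
If you want to confirm the modules are available before writing the interfaces file (ifupdown normally loads them when the interfaces come up, so this is only a sanity check):

modprobe bonding
modprobe 8021q
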
Interfaces Config: /etc/network/interfaces
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0

auto bond0
iface bond0 inet manual
        #bond-mode 802.3ad
        bond-mode 4
        bond-miimon 100
        bond_downdelay 200
        bond_updelay 200
        bond_xmit_hash_policy layer2+3
        bond_lacp_rate slow
        slaves eth0 eth1 eth2 eth3

auto vlan45
iface vlan45 inet static
        vlan_raw_device bond0  
        address 10.200.45.155  
        netmask 255.255.255.0  
        network 10.200.45.0
        broadcast 10.200.45.255

auto vlan48
iface vlan48 inet static
        vlan_raw_device bond0  
        address 10.200.48.121  
        netmask 255.255.255.0  
        network 10.200.48.0
        broadcast 10.200.48.255
        gateway 10.200.48.1

auto vlan49
iface vlan49 inet static
        vlan_raw_device bond0  
        address 10.200.49.155  
        netmask 255.255.255.0  
        network 10.200.49.0
        broadcast 10.200.49.255
I had also read posts about people having problems using the "pretty" or easy-to-read version above, so I also tried the configuration below, with the same results.
# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
#allow-hotplug eth0

auto bond0
iface bond0 inet manual
        #bond-mode 802.3ad
        bond-mode 4
        bond-miimon 100
        bond_xmit_hash_policy layer2+3
        bond_lacp_rate slow
        slaves eth0 eth1 eth2 eth3

auto bond0.45
iface bond0.45 inet static
        address 10.200.45.155
        netmask 255.255.255.0

auto bond0.48
iface bond0.48 inet static
        address 10.200.48.121
        netmask 255.255.255.0   
        gateway 10.200.48.1

auto bond0.49
iface bond0.49 inet static
        address 10.200.49.155
        netmask 255.255.255.0
Troubleshooting:
On Linux
ServerName# cat /proc/net/bonding/bond0
Ethernet Channel Bonding Driver: v3.7.1 (April 27, 2011)

Bonding Mode: IEEE 802.3ad Dynamic link aggregation
Transmit Hash Policy: layer2+3 (2)
MII Status: up
MII Polling Interval (ms): 100  
Up Delay (ms): 200
Down Delay (ms): 200

802.3ad info
LACP rate: slow
Min links: 0
Aggregator selection policy (ad_select): stable
Active Aggregator Info:
        Aggregator ID: 1
        Number of ports: 4
        Actor Key: 17
        Partner Key: 32938
        Partner Mac Address: 00:23:04:ee:be:0a

Slave Interface: eth0
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d8:9d:67:2c:aa:24
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d8:9d:67:2c:aa:25
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth2
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d8:9d:67:2c:aa:26
Aggregator ID: 1
Slave queue ID: 0

Slave Interface: eth3
MII Status: up
Speed: 1000 Mbps
Duplex: full
Link Failure Count: 0
Permanent HW addr: d8:9d:67:2c:aa:27
Aggregator ID: 1
Slave queue ID: 0

ServerName# modinfo bonding
filename:       /lib/modules/3.2.0-4-amd64/kernel/drivers/net/bonding/bonding.ko
alias:          rtnl-link-bond  
author:         Thomas Davis, tadavis@lbl.gov and many others
description:    Ethernet Channel Bonding Driver, v3.7.1
version:        3.7.1
license:        GPL
srcversion:     0384DF6574E0ED31BA573D8
depends:
intree:         Y
vermagic:       3.2.0-4-amd64 SMP mod_unload modversions
parm:           max_bonds:Max number of bonded devices (int)
parm:           tx_queues:Max number of transmit queues (default = 16) (int)
parm:           num_grat_arp:Number of peer notifications to send on failover event (alias of num_unsol_na) (int)
parm:           num_unsol_na:Number of peer notifications to send on failover event (alias of num_grat_arp) (int)
parm:           miimon:Link check interval in milliseconds (int)
parm:           updelay:Delay before considering link up, in milliseconds (int)
parm:           downdelay:Delay before considering link down, in milliseconds (int)
parm:           use_carrier:Use netif_carrier_ok (vs MII ioctls) in miimon; 0 for off, 1 for on (default) (int)
parm:           mode:Mode of operation; 0 for balance-rr, 1 for active-backup, 2 for balance-xor, 3 for broadcast, 4 for 802.3ad, 5 for balance-tlb, 6 for balance-alb (charp)
parm:           primary:Primary network device to use (charp)
parm:           primary_reselect:Reselect primary slave once it comes up; 0 for always (default), 1 for only if speed of primary is better, 2 for only on active slave failure (charp)
parm:           lacp_rate:LACPDU tx rate to request from 802.3ad partner; 0 for slow, 1 for fast (charp)
parm:           ad_select:803.ad aggregation selection logic; 0 for stable (default), 1 for bandwidth, 2 for count (charp)
parm:           min_links:Minimum number of available links before turning on carrier (int)
parm:           xmit_hash_policy:balance-xor and 802.3ad hashing method; 0 for layer 2 (default), 1 for layer 3+4, 2 for layer 2+3 (charp)
parm:           arp_interval:arp interval in milliseconds (int)
parm:           arp_ip_target:arp targets in n.n.n.n form (array of charp)
parm:           arp_validate:validate src/dst of ARP probes; 0 for none (default), 1 for active, 2 for backup, 3 for all (charp)
parm:           fail_over_mac:For active-backup, do not set all slaves to the same MAC; 0 for none (default), 1 for active, 2 for follow (charp)
parm:           all_slaves_active:Keep all frames received on an interfaceby setting active flag for all slaves; 0 for never (default), 1 for always. (int)
parm:           resend_igmp:Number of IGMP membership reports to send on link failure (int)
I also used tcpdump to determine where the connections were getting lost, and examined the captures in wireshark:

tcpdump -i any -U not port 22 -w /tmp/tcpdump_any_20131220.dump

This showed that traffic was coming in without problems and everything was working, except when connecting to vlans that did not have the default gw from a host not on that vlan. This makes it look like a routing issue within the OS. If anyone would find the dump lines interesting, let me know and I can dig them up and post them.
Routing table.
route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
0.0.0.0         10.200.48.1     0.0.0.0         UG    0      0        0 vlan48
10.200.45.0     0.0.0.0         255.255.255.0   U     0      0        0 vlan45
10.200.48.0     0.0.0.0         255.255.255.0   U     0      0        0 vlan48
10.200.49.0     0.0.0.0         255.255.255.0   U     0      0        0 vlan49
 
ip route list
default via 10.200.48.1 dev vlan48
10.200.45.0/24 dev vlan45  proto kernel  scope link  src 10.200.45.155
10.200.48.0/24 dev vlan48  proto kernel  scope link  src 10.200.48.121
10.200.49.0/24 dev vlan49  proto kernel  scope link  src 10.200.49.155
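
With hindsight, one quick check worth running at this point is the per-interface reverse-path filter state (interface names as per the config above):

# a value of 1 (strict) on the vlan interfaces points straight at the problem
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.default.rp_filter
sysctl net.ipv4.conf.vlan45.rp_filter net.ipv4.conf.vlan48.rp_filter net.ipv4.conf.vlan49.rp_filter
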
On Cisco
show interface port-channel 170
port-channel170 is up
 vPC Status: Up, vPC number: 170
  Hardware: Port-Channel, address: 44d3.cae5.50a2 (bia 44d3.cae5.50a2)
  Description: servername
  MTU 1500 bytes, BW 2000000 Kbit, DLY 10 usec
  reliability 255/255, txload 1/255, rxload 1/255
  Encapsulation ARPA
  Port mode is trunk
  full-duplex, 1000 Mb/s
  Input flow-control is off, output flow-control is off
  Switchport monitor is off
  EtherType is 0x8100
  Members in this channel: Eth1/11, Eth3/11
  Last clearing of "show interface" counters never
  52 interface resets
  30 seconds input rate 80 bits/sec, 0 packets/sec
  30 seconds output rate 1832 bits/sec, 2 packets/sec
  Load-Interval #2: 5 minute (300 seconds)
    input rate 112 bps, 0 pps; output rate 1.94 Kbps, 2 pps
  RX
    380152 unicast packets  113302 multicast packets  3248 broadcast packets
    496720 input packets  88421937 bytes
    0 jumbo packets  0 storm suppression packets
    0 runts  0 giants  0 CRC  0 no buffer
    0 input error  0 short frame  0 overrun   0 underrun  0 ignored
    0 watchdog  0 bad etype drop  0 bad proto drop  0 if down drop
    0 input with dribble  0 input discard
    0 Rx pause
Loaded Modules
lsmod | egrep '8021q|loop|bond'
8021q                  19291  0
garp                   13193  1 8021q
bonding                79169  0
loop                   22641  0
Links:
discard packets when the route for outbound traffic differs from the route of incoming traffic
linux_vlan_routing
openvz on debian
Ubuntu bug report where I found my answer
bonding on debian
bonding on wheezy
bonding on wheezy
broadcom related post tg3
openvz on wheezy
Conclusion:
When you are making changes via sysctl and use '-p' to load them, don't forget to restart networking or the server. When you are in the thick of it, remember to make your changes one step at a time so you can find the problem, and don't assume your first hunch is the answer.
26th December 2013 18:15:06 : 1 comment.

21st December 2013

dkg: Kevin M. Igoe should step down from CFRG Co-chair

I've said recently that pervasive surveillance is wrong. I don't think anyone from the NSA should have a leadership position in the development or deployment of Internet communications, because their interests are at odds with the interest of the rest of the Internet. But someone at the NSA is in exactly such a position. They ought to step down.

Here's the background:

The Internet Research Task Force (IRTF) is a body tasked with research into underlying concepts, themes, and technologies related to the Internet as a whole. It acts as a research organization that cooperates with and complements the engineering and standards-setting activities of the Internet Engineering Task Force (IETF).

The IRTF is divided into issue-specific research groups, each of which has a Chair or Co-Chairs who have "wide discretion in the conduct of Research Group business", and are tasked with organizing the research and discussion, ensuring that the group makes progress on the relevant issues, and communicating the general sense of the results back to the rest of the IRTF and the IETF.

One of the IRTF's research groups specializes in cryptography: the Crypto Forum Research Group (CFRG). There are two current chairs of the CFRG: David McGrew <mcgrew@cisco.com> and Kevin M. Igoe <kmigoe@nsa.gov>. As you can see from his e-mail address, Kevin M. Igoe is affiliated with the National Security Agency (NSA). The NSA itself actively tries to weaken cryptography on the Internet so that they can improve their surveillance, and one of the ways they try to do so is to "influence policies, standards, and specifications".

On the CFRG list yesterday, Trevor Perrin requested the removal of Kevin M. Igoe from his position as Co-chair of the CFRG. Trevor's specific arguments rest heavily on the technical merits of a proposed cryptographic mechanism called Dragonfly key exchange, but I think the focus on Dragonfly itself is the least of the concerns for the IRTF.

I've seconded Trevor's proposal, and asked Kevin directly to step down and to provide us with information about any attempts by the NSA to interfere with or subvert recommendations coming from these standards bodies.

Below is my letter in full:

From: Daniel Kahn Gillmor <dkg@fifthhorseman.net>
To: cfrg@ietf.org, Kevin M. Igoe <kmigoe@nsa.gov>
Date: Sat, 21 Dec 2013 16:29:13 -0500
Subject: Re: [Cfrg] Requesting removal of CFRG co-chair

On 12/20/2013 11:01 AM, Trevor Perrin wrote:
> I'd like to request the removal of Kevin Igoe from CFRG co-chair.

Regardless of the conclusions that anyone comes to about Dragonfly
itself, I agree with Trevor that Kevin M. Igoe, as an employee of the
NSA, should not remain in the role of CFRG co-chair.

While the NSA clearly has a wealth of cryptographic knowledge and
experience that would be useful for the CFRG, the NSA is apparently
engaged in a series of attempts to weaken cryptographic standards and
tools in ways that would facilitate pervasive surveillance of
communication on the Internet.

The IETF's public position in favor of privacy and security rightly
identifies pervasive surveillance on the Internet as a serious problem:

https://www.ietf.org/media/2013-11-07-internet-privacy-and-security.html

The documents Trevor points to (and others from similar stories)
indicate that the NSA is an organization at odds with the goals of the IETF.

While I want the IETF to continue welcoming technical insight and
discussion from everyone, I do not think it is appropriate for anyone
from the NSA to be in a position of coordination or leadership.

----

Kevin, the responsible action for anyone in your position is to
acknowledge the conflict of interest, and step down promptly from the
position of Co-Chair of the CFRG.

If you happen to also subscribe to the broad consensus described in the
IETF's recent announcement -- that is, if you care about privacy and
security on the Internet -- then you should also reveal any NSA activity
you know about that attempts to subvert or weaken the cryptographic
underpinnings of IETF protocols.

Regards,

	--dkg
I'm aware that an abdication by Kevin (or his removal by the IETF chair) would probably not end the NSA's attempts to subvert standards bodies or weaken encryption. They could continue to do so by subterfuge, for example, or by private influence on other public members. We may not be able to stop them from doing this in secret, and the knowledge that they may do so seems likely to cast a pall of suspicion over any IETF and IRTF proceedings in the future. This social damage is serious and troubling, and it marks yet another cost to the NSA's reckless institutional disregard for civil liberties and free communication.

But even if we cannot rule out private NSA influence over standards bodies and discussion, we can certainly explicitly reject any public influence over these critical communications standards by members of an institution so at odds with the core principles of a free society.

Kevin M. Igoe, please step down from the CFRG Co-chair position.

And to anyone (including Kevin) who knows about specific attempts by the NSA to undermine the communications standards we all rely on: please blow the whistle on this kind of activity. Alert a friend, a colleague, or a journalist. Pervasive surveillance is an attack on all of us, and those who resist it are heroes.

21st December 2013 22:55:01 : No comments. Link

18th December 2013

dkg: automatically have uscan check signatures

If you maintain software in debian, one of your regular maintenance tasks is checking for new upstream versions, reviewing them, and preparing them for debian if appropriate. One of those steps is often to verify the cryptographic signature on the upstream source archive.

At the moment, most maintainers do the cryptographic check manually, or maybe even don't bother to do it at all. For the common case of detached OpenPGP signatures, though, uscan can now do it for you automatically (as of devscripts version 2.13.3). You just need to tell uscan what keys you expect upstream to be signing with, and how to find the detached signature.

So, for example, Damien Miller recently announced his new key that he will be using to sign OpenSSH releases (his new key has OpenPGP fingerprint 59C2 118E D206 D927 E667 EBE3 D3E5 F56B 6D92 0D30 -- you can verify it has been cross-signed by his older key, and his older key has been revoked with the indication that it was superseded by this one). Having done a reasonable verification of Damien's key, if i was the openssh package maintainer, i'd do the following:

cd ~/src/openssh/
gpg --export '59C2 118E D206 D927 E667  EBE3 D3E5 F56B 6D92 0D30' >> debian/upstream-signing-key.pgp
And then upon noticing that the signature files are named with a simple .asc suffix on the upstream distribution site, we can use the following pgpsigurlmangle option in debian/watch:
version=3
opts=pgpsigurlmangle=s/$/.asc/ ftp://ftp.openbsd.org/pub/OpenBSD/OpenSSH/portable/openssh-(.*)\.tar\.gz 
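
With the key and the watch file in place, a routine run will fetch the .asc alongside the tarball and check the signature (a sketch; exact output varies with the devscripts version):

uscan --verbose
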
I've filed this specific example as debian bug #732441. If you notice a package with upstream signatures that aren't currently being checked by uscan (or if you are upstream, you sign your packages, and you want your debian maintainer to verify them), you can file similar bugs. Or, if you maintain a package for debian, you can just fix up your package so that this check is there on the next upload.

If you maintain a package whose upstream doesn't sign their releases, ask them why not -- wouldn't upstream prefer that their downstream users can verify that each release wasn't tampered with?

Of course, none of these checks take the place of the real work of a debian package maintainer: reviewing the code and the changelogs, thinking about what changes have happened, and how they fit into the broader distribution. But it helps to automate one of the basic safeguards we should all be using. Let's eliminate the possibility that the file was tampered with at the upstream distribution mirror or while in transit over the network. That way, the maintainer's time and energy can be spent where they're more needed.

18th December 2013 03:15:46 : 2 comments.

15th December 2013

lee: Whitelisting hosts in Exim4 causing TLS errors


By the end of 2013 the minimum key size for signed TLS certificates will be 2048 bits. Older 1024-bit certificates will be revoked, and web browsers will throw errors at anyone attempting to connect using 1024-bit keys.

Unfortunately, in the context of server-to-server opportunistic encryption, such as that used by ESMTPS, enforcing that minimum is a dangerous choice: 1024-bit encryption is still far harder to decode than plaintext (which is currently the usual fall-back delivery).

Sadly that's what's happened in some Debian derived releases of Exim4 (using GnuTLS). An affected mail server will likely see something like the following in the logs:

TLS error on connection to mail.example.com [10.81.102.77] (gnutls_handshake): The Diffie-Hellman prime sent by the server is not acceptable (not long enough).

(Even worse, the choice to increase the minimum prime size also breaks connections to many servers that implement EDH, i.e. Ephemeral Diffie-Hellman with smaller encryption keys, such as those allowed for export.)

The quickest way to check if a remote mail server has a small key is to run a command such as:

$ echo QUIT | openssl s_client -connect mail.example.com:25 -starttls smtp 2>/dev/null| grep "public key is"

If the server public key is less than 2048 bit in size, you may want to contact the postmaster or other administrative contact to let them know they need to generate a new server key.[1] If the key is 2048 bit and above, you may need to use another tool to see what's happening.

As of Exim 4.80 there is a new transport option, tls_dh_min_bits, which allows the minimum key size to be set.

So if this affects you, you could modify the minimum key size for the default transport (add "TLS_DH_MIN_BITS = 512"). However, for this sort of workaround my preference is to add special-case configuration.

(Filenames assume the split-config of debian deployments.)

Create /etc/exim4/conf.d/transport/40_temp_tls_smallprime (a modified copy of 30_exim4-config_remote_smtp)

remote_smtp_tls_smallprime:
  debug_print = "T: remote_smtp_tls_smallprime for $local_part@$domain"
  driver = smtp
  multi_domain = false
  tls_dh_min_bits = 512
.ifdef REMOTE_SMTP_HEADERS_REWRITE
  headers_rewrite = REMOTE_SMTP_HEADERS_REWRITE
.endif
.ifdef REMOTE_SMTP_RETURN_PATH
  return_path = REMOTE_SMTP_RETURN_PATH
.endif
.ifdef REMOTE_SMTP_HELO_DATA
  helo_data=REMOTE_SMTP_HELO_DATA
.endif
.ifdef DKIM_DOMAIN
dkim_domain = DKIM_DOMAIN
.endif
.ifdef DKIM_SELECTOR
dkim_selector = DKIM_SELECTOR
.endif
.ifdef DKIM_PRIVATE_KEY
dkim_private_key = DKIM_PRIVATE_KEY
.endif
.ifdef DKIM_CANON
dkim_canon = DKIM_CANON
.endif
.ifdef DKIM_STRICT
dkim_strict = DKIM_STRICT
.endif
.ifdef DKIM_SIGN_HEADERS
dkim_sign_headers = DKIM_SIGN_HEADERS
.endif

Then create /etc/exim4/conf.d/router/177_temp_tls_smallprime (i.e. so it appears before the normal external router) with two entries: one for mail servers that have <2048-bit keys, and one for those that negotiate EDH.

dnslookup_tls_smallprime:
  debug_print = "R: dnslookup_tls_smallprime for $local_part@$domain"
  driver = dnslookup
  domains = ! +local_domains : ! +relay_to_domains
  condition = ${if forany{${lookup dnsdb{>: mxh=$domain}}}{match_domain{$item}{+smallkey}}}
  transport = remote_smtp_tls_smallprime
  same_domain_copy_routing = yes
  no_more

dnslookup_tls_export:
  debug_print = "R: dnslookup_tls_edh for $local_part@$domain"
  driver = dnslookup
  domains = ! +local_domains : ! +relay_to_domains
  condition = ${if forany{${lookup dnsdb{>: mxh=$domain}}}{match_domain{$item}{+tlsexport}}}
  transport = remote_smtp_tls_smallprime
  same_domain_copy_routing = yes
  no_more

Then you add in the lists of mail servers to your main config. For example adding the following to /etc/exim4/conf.d/main/10_temp_smallprime

## Servers that present small keys
domainlist smallkey = oldserver.example.com : mx2.example.org
## Hosts that use large keys to sign small "export" size keys for encryption
domainlist tlsexport = *.example.com

[1] It may be more complex than that, of course.[3] While most MTAs operating in client mode would be expected to deal with a 2048 bit key, there are known older email clients that would not be able to cope with keys larger than 1024 bits. Servers may need to maintain backward compatibility for these older clients.

[2] Using GnuTLS to inspect the key negotiation is slightly more involved, since you need to do the SMTP negotiation yourself:

$ gnutls-cli --crlf --insecure --starttls --port 25 server.example.com
220 server.example.com ESMTP
EHLO my.host.name
250-server.example.com Hello my.host.name
250-AUTH LOGIN
250-AUTH=LOGIN
250-STARTTLS
250 HELP
STARTTLS
220 TLS go ahead
(Hit Ctrl-D)
*** Starting TLS handshake
- Ephemeral Diffie-Hellman parameters
 - Using prime: 768 bits
 - Secret key: 766 bits
 - Peer's public key: 767 bits
[...]
QUIT

[3] You'll note that the example above, adapted from a real "secure email provider", tops out at 768 bits. Which is still the US export limit for asymmetric algorithms used by exported software not in the public domain. They use a 2048 bit key merely to sign the small key used to encrypt the session.

15th December 2013 17:27:21 : No comments. Link

13th December 2013

dkg: OpenPGP Key IDs are not useful

Fingerprints and Key IDs

OpenPGP v4 fingerprints are made from an SHA-1 digest over the key's public key material, creation date, and some boilerplate. SHA-1 digests are 160 bits in length. The "long key ID" of a key is the last 64 bits of the key's fingerprint. The "short key ID" of a key is the last 32 bits of the key's fingerprint. You can see both of the key IDs as hashes in and of themselves, as "32-bit truncated SHA-1" is a sort of hash (albeit not a cryptographically secure one).
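
As a worked example, using a made-up fingerprint rather than any real key:

  fingerprint:  0123 4567 89AB CDEF 0123  4567 89AB CDEF 0123 4567
  long key ID:  89ABCDEF01234567   (the last 16 hex digits)
  short key ID: 01234567           (the last 8 hex digits)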

I'm arguing here that short Key IDs and long Key IDs are actually useless, and we should stop using them entirely where we can do so. We certainly should not be exposing normal human users to them.

(Note that I am not arguing that OpenPGP v4 fingerprints themselves are cryptographically insecure. I do not believe that there are any serious cryptographic risks currently associated with OpenPGP v4 fingerprints. This post is about Key IDs specifically, not fingerprints.)

Key IDs have serious problems

Asheesh pointed out two years ago that OpenPGP short key IDs are bad because they are trivial to replicate. This is called a preimage attack against the short key ID (which is just a truncated fingerprint).

Today, David Leon Gil demonstrated that a collision attack against the long key ID is also trivial. A collision attack differs from a preimage attack in that the attacker gets to generate two different things that both have the same digest. Collision attacks are easier than preimage attacks because of the birthday paradox. dlg's colliding keys are not a surprise, but hopefully the explicit demonstration can serve as a wakeup call to help us improve our infrastructure.

So this is not a way to spoof a specific target's long key ID on its own. But it indicates that it's more of a worry than most people tend to think about or plan for. And remember that for a search space as small as 64-bits (the long key ID), if you want to find a pre-image against any one of 2k keys, your search is actually only in a (64-k)-bit space to find a single pre-image.

The particularly bad news: gpg doesn't cope well with the two keys that have the same long key ID:

0 dkg@alice:~$ gpg --import x
gpg: key B8EBE1AF: public key "9E669861368BCA0BE42DAF7DDDA252EBB8EBE1AF" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
0 dkg@alice:~$ gpg --import y
gpg: key B8EBE1AF: doesn't match our copy
gpg: Total number processed: 1
2 dkg@alice:~$ 
This probably also means that caff (from the signing-party package) will also choke when trying to deal with these two keys.

I'm sure there are other OpenPGP-related tools that will fail in the face of two keys with matching 64-bit key IDs.

We should not use Key IDs

I am more convinced than ever that key IDs (both short and long) are actively problematic to real-world use of OpenPGP. We want two things from a key management framework: unforgeability, and human-intelligible handles. Key IDs fail at both.

So reasonable tools should not expose either short or long key IDs to users, or use them internally if they can avoid them. They do not have any properties we want, and in the worst case, they actively mislead people or lead them into harm. What reasonable tool should do that?

How to replace Key IDs

If we're not going to use Key IDs, what should we do instead?

For anything human-facing, we should be using human-intelligible things like user IDs and creation dates. These are trivial to forge, but people can relate to them. This is better than offering the user something that is also trivial to forge, but that people cannot relate to. The job of any key management UI should be to interpret the cryptographic assurances provided by the certifications and present that to the user in a comprehensible way.

For anything not human-facing (e.g. key management data storage, etc), we should be using the full key itself. We'll also want to store the full fingerprint as an index, since that is used for communication and key exchange (e.g. on calling cards).

There remain parts of the spec (e.g. PK-ESK, Issuer subpackets) that make some use of the long key ID in ways that provide some measure of convenience but no real cryptographic security. We should fix the spec to stop using those, and either remove them entirely, or replace them with the full fingerprints. These fixes are not as urgent as the user-facing changes or the critical internal indexing fixes, though.

Key IDs are not useful. We should stop using them.

13th December 2013 20:04:59 : No comments. Link

5th December 2013

dkg: The legal utility of deniability in secure chat

This Monday, I attended a workshop on Multi-party Off the Record Messaging and Deniability hosted by the Calyx Institute. The discussion was a combination of legal and technical people, looking at how the characteristics of this particular technology affect (or do not affect) the law.

This is a report-back, since I know other people wanted to attend. I'm not a lawyer, but I develop software to improve communications security, I care about these questions, and I want other people to be aware of the discussion. I hope I did not misrepresent anything below; I'd be happy if anyone wants to offer corrections.

Background

Off the Record Messaging (OTR) is a way to secure instant messaging (e.g. jabber/XMPP, gChat, AIM).

The two most common characteristics people want from a secure instant messaging program are:

Authentication
Each participant should be able to know specifically who the other parties are on the chat.
Confidentiality
The content of the messages should only be intelligible to the parties involved with the chat; it should appear opaque or encrypted to anyone else listening in. Note that confidentiality effectively depends on authentication -- if you don't know who you're talking to, you can't make sensible assertions about confidentiality.

As with many other modern networked encryption schemes, OTR relies on each user maintaining a long-lived "secret key", and publishing a corresponding "public key" for their peers to examine. These keys are critical for providing authentication (and by extension, for confidentiality).

But OTR offers several interesting characteristics beyond the common two. Its most commonly cited characteristics are "forward secrecy" and "deniability".

Forward secrecy
Assuming the parties communicating are operating in good faith, forward secrecy offers protection against a special kind of adversary: one who logs the encrypted chat, and subsequently steals either party's long-term secret key. Without forward secrecy, such an adversary would be able to discover the content of the messages, violating the confidentiality characteristic. With forward secrecy, this adversary is stymied and the messages remain confidential.
Deniability
Deniability only comes into play when one of the parties is no longer operating in good faith (e.g. their computer is compromised, or they are collaborating with an adversary). In this context, if Alice is chatting with Bob, she does not want Bob to be able to cryptographically prove to anyone else that she made any of the specific statements in the conversation. This is the focus of Monday's discussion.

To be clear, this kind of deniability means Alice can correctly say "you have no cryptographic proof I said X", but it does not let her assert "here is cryptographic proof that I did not say X" (I can't think of any protocol that offers the latter assertion). The opposite of deniability is a cryptographic proof of origin, which usually runs something like "only someone with access to Alice's secret key could have said X."

The traditional two-party OTR protocol has offered both forward secrecy and deniability for years. But deniability in particular is a challenging characteristic to provide for group chat, which is the domain of Multi-Party OTR (mpOTR). You can read some past discussion about the challenges of deniability in mpOTR (and why it's harder when there are more than two people chatting) from the otr-users mailing list.

If you're not doing anything wrong...

The discussion was well-anchored by a comment from another participant who cheekily asked "If you're not doing anything wrong, why do you need to hide your chat at all, let alone be able to deny it?"

The general sense of the room was that we'd all heard this question many times, from many people. There are lots of problems with the ideas behind the question from many perspectives. But just from a legal perspective, there are at least two problems with the way this question is posed:

In these situations, people confront real risk from the law. If we care about these people, we need to figure out if we can build systems to help them reduce that legal risk (of course we also need to fix broken laws, and the legal environment in general, but those approaches were out of scope for this discussion).

The Legal Utility of Deniability

Monday's meeting was called specifically because it wasn't clear how much real-world usefulness there is in the "deniability" characteristic, and whether this feature is worth the development effort and implementation tradeoffs required. In particular, the group was interested in deniability's utility in legal contexts; many (most?) people in the room were lawyers, and it's also not clear that deniability has much utility outside of a formal legal setting. If your adversary isn't constrained by some rule of law, they probably won't care at all whether there is a cryptographic proof or not that you wrote a particular message (in retrospect, one possible exception is exposure in the media, but we did not discuss that scenario).

Places of possible usefulness

So where might deniability come in handy during civil litigation or a criminal trial? Presumably the circumstance is that a piece of a chat log is offered as incriminating evidence, and the defendant is trying to deny something that they appear to have said in the log.

This denial could take place in two rather different contexts: during rules over admissibility of evidence, or (once admitted) in front of a jury.

In legal wrangling over admissibility, apparently a lot of horse-trading can go on -- each side concedes some things in exchange for the other side conceding other things. It appears that cryptographic proof of origin (that is, a lack of deniability) on the chat logs themselves might reduce the amount of leverage a defense lawyer can get from conceding or arguing strongly over that piece of evidence. For example, if the chain of custody of a chat transcript is fuzzy (i.e. the transcript could have been mishandled or modified somehow before reaching trial), then a cryptographic proof of origin would make it much harder for the defense to contest the chat transcript on the grounds of tampering. Deniability would give the defense more bargaining power.

In arguing about already-admitted evidence before a jury, deniability in this sense seems like a job for expert witnesses, who would need to convince the jury of their interpretation of the data. There was a lot of skepticism in the room over this, both around the possibility of most jurors really understanding what OTR's claim of deniability actually means, and on jurors' ability to distinguish this argument from a bogus argument presented by an opposing expert witness who is willing to lie about the nature of the protocol (or who misunderstands it and passes on their misunderstanding to the jury).

The complexity of the tech systems involved in a data-heavy prosecution or civil litigation are themselves opportunities for lawyers to argue (and experts to weigh in) on the general reliability of these systems. Sifting through the quantities of data available and ensuring that the appropriate evidence is actually findable, relevant, and suitably preserved for the jury's inspection is a hard and complicated job, with room for error. OTR's deniability might be one more element in a multi-pronged attack on these data systems.

These are the most compelling arguments for the legal utility of deniability that I took away from the discussion. I confess that they don't seem particularly strong to me, though some level of "avoiding a weaker position when horse-trading" resonates with me.

What about the arguments against its utility?

Limitations

The most basic argument against OTR's deniability is that courts don't care about cryptographic proof for digital evidence. People are convicted or lose civil cases based on unsigned electronic communications (e.g. normal e-mail, plain chat logs) all the time. OTR's deniability doesn't provide any legal cover stronger than trying to claim you didn't write a given e-mail that appears to have originated from your account. As someone who understands the forgeability of e-mail, i find this overall situation troubling, but it seems to be where we are.

Worse, OTR's deniability doesn't cover whether you had a conversation, just what you said in that conversation. That is, Bob can still cryptographically prove to an adversary (or before a judge or jury) that he had a communication with someone controlling Alice's secret key (which is probably Alice); he just can't prove that Alice herself said any particular part of the conversation he produces.

Additionally, there are runtime tradeoffs depending on how the protocol manages to achieve these features. For example, forward secrecy itself requires an additional round trip or two when compared to authenticated, encrypted communications without forward secrecy (a "round trip" is a message from Alice to Bob followed by a message back from Bob to Alice).

Getting proper deniability into the mpOTR spec might incur extra latency (imagine having to wait 60 seconds after everyone joins before starting a group chat, or a 15-second pause in the chat when a new member joins), extra computational cost (meaning chat might not work well on slower or older devices), or an order of magnitude more bandwidth (meaning chat might not work at all on a weak connection). There could also simply be complexity that makes a protocol with deniability harder to implement correctly than an alternative protocol without it. Incorrectly-implemented software can put its users at risk.

I don't know enough about the current state of mpOTR to know what the specific tradeoffs are for the deniability feature, but it's clear there will be some. Who decides whether the tradeoffs are worth the feature?

Other kinds of deniability

Further weakening the case for the legal utility of OTR's deniability, there seem to be other ways to get deniability over a chat transcript in a legal context.

There are deniability arguments that can be made from outside the protocol. For example, you can always claim someone else took control of your computer while you were asleep or using the bathroom or eating dinner, or you can claim that your computer had a virus that exported your secret key and it must have been used by someone else.

If you're desperate enough to sacrifice your digital identity, you could arrange to have your secret key published, at which point anyone can make signed statements with it. Having forward secrecy makes it possible to expose your secret key without exposing the content of your past communications to any listener who happened to log them.

Conclusion

My takeaway from the discussion is that the legal utility of OTR's deniability is non-zero, but quite low; and that development energy focused on deniability is probably only justified if there are very few costs associated with it.

Several folks pointed out that most communications-security tools are too complicated or inconvenient to use for normal people. If we have limited development energy to spend on securing instant messaging, usability and ubiquity would be a better focus than this form of deniability.

Secure chat systems that take too long to build, that are too complex, or that are too cumbersome are not going to be adopted. But this doesn't mean people won't chat at all -- they'll just use cleartext chat, or maybe they'll use supposedly "secure" protocols with even worse properties: for example, without proper end-to-end authentication (permitting spoofing or impersonation by the server operator, or potentially by anyone else); with encryption that is reversible by the chatroom operator, or flawed enough to be reversed by any listener with a powerful computer; without forward secrecy; and so on.

As a demonstration of this, we heard some lawyers in the room admit to using Skype to talk with their clients, even though they know it's not a safe communications channel because their clients' adversaries might have access to the Skype messaging system itself.

My conclusion from the meeting is that there are a few particular situations where deniability could be legally useful, but that overall it is not where we as a community should be spending our development energy. Perhaps in some future world where all communications are already authenticated, encrypted, and forward-secret by default, we can look into improving our protocols to provide this characteristic; but for now, we really need to work on usability, popularization, and wide deployment.

Thanks

Many thanks to Nick Merrill for organizing the discussion, to Shayana Kadidal and Stanley Cohen for providing a wealth of legal insight and experience, to Tom Ritter for an excellent presentation of the technical details, and to everyone in the group who participated in the interesting and lively discussion.

5th December 2013 23:14:02 : 5 comments.

12th November 2013

Steve: Site reverted ..

Due to a calamity I had to revert the database behind this site by a couple of days. Comments and blog entries from that period have been lost.

Not pleased about this at all, but I guess it's a first for the site.

In happier news today I shot pictures of some lovely owls.

That was a pleasant diversion from taking pictures of pubs - which is my new project.

12th November 2013 15:47:59 : No comments. Link

3rd November 2013

ajt: Mosh

Tags:

Yesterday I spotted a link to Mosh. I think I've seen it before, but for some reason I bothered to read the whole article this time. Mosh is a new remote-shell tool specially designed to work over mobile and intermittent networks. It's in Debian stable, so I installed it and gave it a go. At the moment it's not a full replacement for SSH - you still need SSH, but only to bootstrap the tool.

You start a Mosh session by typing:

$ mosh user@server

Just like you would with SSH - in fact, that's how it starts: you log in to the remote server over SSH and start a mosh-server under your own account (no root access needed). Back on the client you then connect to the mosh-server using the mosh-client. The two ends exchange data using UDP rather than TCP, and the connection is encrypted with AES-128 in OCB mode.

Each end maintains what it thinks the "screen" should look like, so the client mostly does local echo, reducing lag - though there is smart stuff in there to decide when not to. Because the transport is UDP, as long as the client and server are both still running they will re-connect after an outage or a client IP change as if nothing had happened. Should you become disconnected, the two ends simply resynchronise the current state when you reconnect - the state of the server's screen during the outage is irrelevant and thrown away. This works even after the client is suspended and wakes up on a different network.
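For anyone wanting to try it, a minimal sketch (server.example.org is a placeholder, and the UDP range below is Mosh's documented default - adjust to taste):

# on both ends (Debian/Ubuntu):
$ sudo apt-get install mosh

# on the server, permit Mosh's default UDP port range through the firewall:
$ sudo iptables -A INPUT -p udp --dport 60000:61000 -j ACCEPT

# from the client - this logs in over SSH, starts mosh-server, then switches to UDP:
$ mosh user@server.example.org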

The developers claim better UTF-8 terminal support than most other tools, and the modular design of the whole tool makes it easier to extend than SSH and should make security auditing easier.

Anyhow, it's interesting - have a look if you have time: mosh.mit.edu. It's not yet a complete SSH replacement, but for lots of things it's already very useful: faster than SSH and more robust in real-world use. I can't comment on how secure it is; the authors say they are confident, but they are open about the fact that it hasn't had the same review that OpenSSH has.

3rd November 2013 16:57:36 : No comments. Link

30th October 2013

dkg: getting to TLS (STARTTLS HOWTO)

Tags: , , , , , , ,
Many protocols today allow you to upgrade to TLS from within a cleartext version of the protocol. This often falls under the rubric of "STARTTLS", though different protocols have different ways of doing it.

I often forget the exact steps, and when I'm debugging a TLS connection (e.g. with tools like gnutls-cli) I need to poke a remote peer into being ready for a TLS handshake. So I'm noting the different mechanisms here. Lines starting with C: are from the client; lines starting with S: are from the server.

Many of these are (roughly) built into openssl s_client, using the -starttls option. Sometimes this doesn't work because the handshake needs tuning for a given server; other times you want to do this with a different TLS library. To use the techniques below with gnutls-cli from the gnutls-bin package, just provide the --starttls argument (and the appropriate --port XXX argument), and then hit Ctrl+D when you think it's OK to start the TLS negotiation.

SMTP

The polite SMTP handshake (on port 25 or port 587) that negotiates a TLS upgrade looks like:

C: EHLO myhostname.example
S: [...]
S: 250-STARTTLS
S: [...]
S: 250 [somefeature]
C: STARTTLS
S: 220 2.0.0 Ready to start TLS
<Client can begin TLS handshake>
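For example, either of these should get you to the same point as the transcript above (mail.example.net is a placeholder; with gnutls-cli you type the EHLO and STARTTLS lines yourself before hitting Ctrl+D):

$ openssl s_client -connect mail.example.net:25 -starttls smtp
$ gnutls-cli --starttls --port 25 mail.example.net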
IMAP

The polite IMAP handshake (on port 143) that negotiates a TLS upgrade looks like:
S: OK [CAPABILITY IMAP4rev1 [...] STARTTLS [...]] [...]
C: A STARTTLS
S: A OK Begin TLS negotiation now
<Client can begin TLS handshake>
POP

The polite POP handshake (on port 110) that negotiates a TLS upgrade looks like:
S: +OK POP3 ready
C: STLS
S: +OK Begin TLS 
<Client can begin TLS handshake>
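openssl s_client knows this exchange too; as far as I know the magic word is pop3 (placeholder hostname again):

$ openssl s_client -connect mail.example.net:110 -starttls pop3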
XMPP

The polite XMPP handshake (on port 5222 for client-to-server, or port 5269 for server-to-server) that negotiates a TLS upgrade looks something like this (note that the domain requested needs to be the right one):
C: <?xml version="1.0"?><stream:stream to="example.net"
C:  xmlns="jabber:client" xmlns:stream="http://etherx.jabber.org/streams" version="1.0">
S: <?xml version='1.0'?>
S: <stream:stream
S:  xmlns:db='jabber:server:dialback'
S:  xmlns:stream='http://etherx.jabber.org/streams'
S:  version='1.0'
S:  from='example.net'
S:  id='d34edc7c-22bd-44b3-9dba-8162da5b5e72'
S:  xml:lang='en'
S:  xmlns='jabber:server'>
S: <stream:features>
S: <dialback xmlns='urn:xmpp:features:dialback'/>
S: <starttls xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
S: </stream:features>
C: <starttls xmlns="urn:ietf:params:xml:ns:xmpp-tls" id="1"/>
S: <proceed xmlns='urn:ietf:params:xml:ns:xmpp-tls'/>
<Client can begin TLS handshake>
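Reasonably recent versions of openssl s_client can do this dance too; note that it seems to use the host you connect to as the stream's target domain, which may matter on multi-domain servers (example.net is a placeholder):

$ openssl s_client -connect example.net:5222 -starttls xmpp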
NNTP

RogerBW (in the comments below) points out that NNTP has TLS support:
C: CAPABILITIES
S: [...]
S: STARTTLS
S: [...]
S: .
C: STARTTLS
S: 382 Continue with TLS negotiation
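Last time I checked, openssl's -starttls didn't cover NNTP, but gnutls-cli's manual mode works here too (placeholder hostname; type the CAPABILITIES and STARTTLS lines yourself, then Ctrl+D after the 382 response):

$ gnutls-cli --starttls --port 119 news.example.net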
PostgreSQL

I got mail from James Cloos suggesting how to negotiate an upgrade to TLS with the PostgreSQL RDBMS. He points to the protocol docs - in particular, to the message-flow documents, and to the SSLRequest and StartupMessage chunks of the protocol spec (with the clarification that data is sent in network byte order). It won't work in a text-mode session, but it's worth noting here anyway:

The client starts by sending these eight octets:

0x00 0x00 0x00 0x08 0x04 0xD2 0x16 0x2F
and the server replies with 'S' for secure or 'N' for not. If the reply is S, TLS negotiation follows.

The message is int32(8) - declaring a total length of 8 octets - followed by int16(1234),int16(5678), all sent in network byte order.
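A quick way to poke a server from the shell (db.example.net is a placeholder; this assumes bash's printf \x escapes and a netcat with the -q option, as shipped on Debian):

$ printf '\x00\x00\x00\x08\x04\xd2\x16\x2f' | nc -q 1 db.example.net 5432
S

A lone S back means the server is ready for a TLS handshake on that connection.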

(The non-TLS case starts with a similar message carrying int16(3),int16(0) for protocol version 3.0; STARTTLS is essentially PostgreSQL protocol version 1234.5678.)

What else?

I don't know (but would like to) how to do:

If you know other mechanisms, or see bugs in the simple handshakes I've posted above, please let me know, either by e-mail or in the comments here.

Other interesting notes: RFC 2817, a not-widely-supported mechanism for upgrading to TLS in the middle of a normal HTTP session.

30th October 2013 17:00:45 : 3 comments.

16th October 2013

blackm: I have a Debian server again

Tags: ,
Now I am not only the proud owner of a Debian netbook, I also own a server running Debian. It is a Raspberry Pi with a huge USB hard drive connected. It's a nice toy which I wouldn't want to be without. Its main purpose is to run ownCloud for data exchange and calendar/contact sharing.

Yes, I finally have a central address book. I failed to set up an LDAP server ten years ago, and now I finally have a solution :-)
16th October 2013 20:32:59 : No comments. Link

10th October 2013

simonw: Vodafone Sure Signal - flashing Internet light

The latest Vodafone Sure Signal 3 box ("Alcatel-Lucent 9361 Home Cell p3.0", as it says on the box) did something a little odd, for which I found a load of non-answers.

If, when you complete registration and plug it in, the power light stays on but the Internet light flashes slowly, the most likely explanation is that it is downloading an update from the Internet (check whether the orange light on the Ethernet port is flashing, showing traffic). I have no idea how big the download is, but it took a significant time on a 16Mbps line (more than 20 minutes).

So basically leave it an hour before trying to figure it out, because that is less frustrating than trying to find good documentation for the right version from Vodafone. No, really, just leave it; if you must fiddle, check that it has picked up an IP address from your DHCP server, or do something else that won't disrupt the download.
10th October 2013 11:21:25 : No comments. Link

8th October 2013

dkg: Unaccountable surveillance is wrong

Tags: ,
As I mentioned earlier, the information in the documents released by Edward Snowden shows a clear pattern of corporate and government abuse of the information networks that are now deeply intertwined with the lives of many people all over the world.

Surveillance is a power dynamic where the party doing the spying has power over the party being surveilled. The surveillance state that results when one party has "Global Cryptologic Dominance" is a seriously bad outcome. The old saw goes "power corrupts, and absolute power corrupts absolutely". In this case, the stated goal of my government appears to be absolute power in this domain, with no constraint on the inevitable corruption. If you are a supporter of any sort of a just social contract (e.g. International Principles on the Application of Human Rights to Communications Surveillance), the situation should be deeply disturbing.

One of the major sub-threads in this discussion is how the NSA and their allies have actively tampered with and weakened the cryptographic infrastructure that everyone relies on for authenticated and confidential communications on the 'net. This kind of malicious work puts everyone's communication at risk, not only those people who the NSA counts among their "targets" (and the NSA's "target" selection methods are themselves fraught with serious problems).

The US government is supposed to take pride in the checks and balances that keep absolute power out of any one particular branch. One of the latest attempts to simulate "checks and balances" was the President's creation of a "Review Group" to oversee the current malefactors. The review group then asked for public comment. A group of technologists (including myself) submitted a comment demanding that the review group provide concrete technical details to independent technologists.

Without knowing the specifics of how the various surveillance mechanisms operate, the public in general can't make informed assessments about what they should consider to be personally safe. And lack of detailed technical knowledge also makes it much harder to mount an effective political or legal opposition to the global surveillance state (e.g. consider the terrible Clapper v. Amnesty International decision, where plaintiffs were denied standing to sue the Director of National Intelligence because they could not demonstrate that they were being surveilled).

It's also worth noting that the advocates for global surveillance do not themselves want to be surveilled, and that (for example) the NSA has tried to obscure as much of its operations as possible, by over-classifying documents and making spurious claims of "national security". This is where the surveillance power dynamic is most baldly in play, and many parts of the US government's intelligence and military apparatus have a long history of acting in bad faith to obscure their activities.

The people who have been operating these surveillance systems should be ashamed of their work, and those who have been overseeing the operation of these systems should be ashamed of themselves. We need to better understand the scope of the damage done to our global infrastructure so we can repair it if we have any hope of avoiding a complete surveillance state in the future. Getting the technical details of these compromises in the hands of the public is one step on the path toward a healthier society.

Postscript

Lest I be accused of optimism, let me make clear that fixing the technical harms is necessary, but not sufficient; even if our technical infrastructure had not been deliberately damaged, or if we manage to repair it and stop people from damaging it again, far too many people still regularly accept ubiquitous private (corporate) surveillance. Private surveillance organizations (like Facebook and Google) are too often in a position where their business interests are at odds with their users' interests, and powerful adversaries can use a surveillance organization as a lever against weaker parties.

But helping people to improve their own data sovereignty and to avoid subjecting their friends and allies to private surveillance is a discussion for a separate post, I think.

8th October 2013 20:12:10 : No comments. Link

29th September 2013

Steve: Cluster updated ..

Tags: ,

This weekend I've upgraded the cluster behind this site to be 100% wheezy-based.

No major surprises; fingers crossed nothing breaks ..

29th September 2013 16:31:37 : No comments. Link

28th September 2013

dkg: RIP Cookiepuss

Yesterday, I said a sad goodbye to an old friend at ABC No Rio. Cookiepuss was a steadfast companion in my volunteer shifts at the No Rio computer center, a cranky yet gregarious presence. I met her soon after moving to New York, and have hung out with her nearly every week for years.

[Cookiepuss -- No Dogs No Masters]

She had the run of the building at ABC No Rio, and was friends with all sorts of people. She was known and loved by punks and fine artists, by experimental musicians and bike mechanics, computer geeks and librarians, travelers and homebodies, photographers, screenprinters, anarchists, community organizers, zinesters, activists, performers, and weirdos of all stripes.

For years, she received postcards from all over the world, including several from people who had never even met her in person. In her younger days, she was a ferocious mouser, and even as she shrank with age and lost some of her teeth she remained excited about food.

She was an inveterate complainer; a pants-shredder; a cat remarkably comfortable with dirt; a welcoming presence to newcomers and a friendly old curmudgeon who never seemed to really hold a grudge, even when I had to do horrible things like help her trim her nails.

After a long life, she died having said her goodbyes, and surrounded by people who loved her. I couldn't have asked for better, but I miss her fiercely.

28th September 2013 04:28:38 : 1 comment.

25th September 2013

dkg: half a minute for science!

Tags:
A friend is teaching a class on data analysis. She is building a simple and rough data set for the class to examine, and to spur discussion. You can contribute in half a minute! Here's how:
  1. get a stopwatch or other sort of timer (whatever device you're reading this on probably has such a thing).
  2. start the timer, but don't look at it.
  3. wait for what you think is 30 seconds, and then look at the timer
  4. how many actual seconds elapsed?
The data doesn't need to be particularly high-precision (whole second values are fine). The other data points my friend is looking for are age (in years, again, whole numbers are fine) and gender.
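If the device you're reading this on is somehow short of a stopwatch, here's a tiny bash sketch of the same experiment:

$ t0=$(date +%s); read -p 'Press Enter when you think 30 seconds have passed... '; echo "elapsed: $(( $(date +%s) - t0 )) seconds"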

You can send me your results by e-mail (I suspect you can find my address if you're reading this blog). Please put "half a minute for science" in the subject line, and make sure you include:
  1. how many actual seconds elapsed
  2. your age (in years)
  3. your gender

Science thanks you!
25th September 2013 21:41:30 : 17 comments.

10th September 2013

dkg: Support privacy-respecting network services!

Support privacy-respecting network services! Donate to Riseup.net!

There's a lot of news recently about some downright Orwellian surveillance executed across the globe by my own government with the assistance of major American corporations. The scope is huge, and the implications are depressing. It's scary and frustrating for anyone who cares about civil society, freedom of speech, cultural autonomy, or data sovereignty.

As bad as the situation is, though, there are groups like Riseup and May First/People Link who actively resist the data dragnet.

The good birds at Riseup have been tireless advocates for information autonomy for people and groups working for liberatory social change for years. They have provided (and continue to provide) impressive, well-administered infrastructure using free software to help further these goals, and they have a strong political commitment to making a better world for all of us and to resisting strongarm attempts to turn over sensitive data. And they provide all this expertise and infrastructure and support on a crazy shoestring of a budget.

So if the news has got you down, or frustrated, or upset, and you want to do something to help improve the situation, you could do a lot worse than sending some much-needed funds to help Riseup maintain an expanding infrastructure. This fundraising campaign will only last a few more days, so give now if you can!

(note: I have worked with some of the riseup birds in the past, and hope to continue to do so in the future. I consider it critically important to have them as active allies in our collective work toward a better world, which is why I'm doing the unusual thing of asking for donations for them on my blog.)

10th September 2013 06:07:11 : No comments. Link

2nd September 2013

Steve: A simple forum ..

I spent a few nights last week prototyping the kind of forum I'd like to use for supporting my Lumail email client.

The end result was something that was fast, clean, and reasonably well-coded. The only aspect where it falls down is the looks department.

I'm still in two minds about whether I need a forum to support the application, so for the moment it is just deployed on a random spare domain:

It can be described as a cross between Hacker News and Reddit. (Though the example disables arbitrary tagging, which makes the Reddit inspiration less obvious.)

2nd September 2013 19:36:32 : 2 comments.

9th August 2013

lee: Bouncing mail for lavabit.com

Tags:

With no notice, email provider lavabit.com shut down following undisclosed requests from the US government. I find the issue surrounding the shutdown personally concerning, but I've also got a big queue of undelivered mail for (former) lavabit.com customers.

Lavabit allowed people to use their own domains and point the MX records at its mail servers. Those mail servers are no longer responsive - they're not rejecting the mail, just dropping the connection, which means normal SMTP servers such as Exim will continue to retry.

I'm confident these servers won't be back in any useful form any time soon, so anything still using them as MX targets is undeliverable right now. Normally I'd rely on Exim's retry configuration, but I want to send a specific message in the rejection.

So instead, I've used a quick-n-dirty router at /etc/exim4/conf.d/router/101_local_lavabit_is_dead

lavabit_is_dead:
  debug_print = "R: lavabit_is_dead for $local_part@$domain"
  # the redirect driver with :fail: data turns matching mail into a bounce
  driver = redirect
  # fire for any domain whose MX records include mx.lavabit.com
  condition = ${if forany{${lookup dnsdb{>: mxh=$domain}}}{match_domain{$item}{mx.lavabit.com}}}
  allow_fail
  data = :fail: mail for $domain rejected because lavabit.com has been shut down
  no_more
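To sanity-check the router before trusting it with real mail, Exim's address-testing mode shows which router claims an address (the domain here is hypothetical - any domain whose MX points at mx.lavabit.com will do):

$ sudo update-exim4.conf && sudo service exim4 reload
$ exim4 -bt someone@some-lavabit-hosted-domain.example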
A more traditional accelerated timeout would probably look like:
mx.lavabit.com timeout_connect_MX F,1h,20m
9th August 2013 19:18:51 : No comments. Link

Generated by planet1.debian-administration.org.