Secrets to Good Home Wifi

You open up your laptop, enter your password, and then start your browser. Maybe you click on a movie.

What happens next? For far too many homes, the movie starts playing and then stops to rebuffer – maybe you even get kicked off your wifi completely. The ISP gets blamed – and may be at fault, but probably not for the reason you are yelling at them. The problem is most likely not your internet connection. It’s your wireless network.

These tips are for experienced IT people who are comfortable reading system manuals, but who don’t have strong expertise in 802.11 wireless networking. Wifi is, sadly, not a “plug-and-play” technology – certainly anyone can learn about it and become an expert, but sometimes it’s better to hire one. If you need your wifi to work like your home’s electricity, and always be there when you need it, sometimes you’ll need to hire experts in the field. But I’m going to assume that you are more of a do-it-yourself type of person, comfortable learning more about your technology, and able to read and understand your network devices’ manuals.

With that in mind, how do we fix this?

Fixing OSX’s Broken External Display Support

I have two external monitors on my iMac – both connected via mini-DP to DP cables. I didn’t want to spend the money Apple charges for displays, so these are third-party displays with specs very similar to Apple’s displays.

But…the text on them looks like crap. It’s jagged, blocky, and just generally ugly – why? The iMac built-in monitor displays text beautifully.

It turns out that if I go to About This Mac, then System Report, then Graphics/Displays, I can see the problem: OSX thinks my two external monitors are televisions. That triggers a TV-style color space (YCbCr rather than RGB) and disables text anti-aliasing. In other words, the display noticeably looks like crap.

I suspect this is a way for Apple to discourage the use of third-party hardware – I’d like to believe it isn’t, but since my displays have a native resolution unlike that of any TV, it would be simple for Apple to tell a TV from a monitor. Even better, they could provide an easy interface to override the guess. Besides, Windows and Linux look beautiful on these monitors without any tweaking, so unless you think Apple somehow lacks the brilliant minds Microsoft has, this isn’t an impossible problem to solve – particularly because Apple has had many years to fix it.

Back to the topic: how do you fix it? Fortunately, others have figured out how to deal with this problem – rather than repeating the solution here, go to the I Reckon blog and read Force RGB mode in Mac OS X to fix the picture quality of an external monitor.
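For orientation, the general shape of that fix (my summary of the linked post – the script name and override path come from there, and may differ on newer OSX releases, where SIP can also get in the way):

laptop$ ruby patch-edid.rb    # reads the display's EDID, writes a DisplayVendorID-* override directory
laptop$ sudo cp -r DisplayVendorID-* /System/Library/Displays/Overrides/
# reboot, then re-check System Report: the display should no longer show as a television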

Making PAR::Packer Work with RPM

I needed to distribute some RPMs containing an executable created with PAR::Packer (using pp).

If you don’t know what PAR::Packer is: it’s the incredibly awesome Perl 5 utility that creates an executable containing your Perl interpreter, your code, and any libraries you need.  It’s basically one way to escape dependency hell with Perl – you can use all the wonderful CPAN modules without fear. You can use the features of recent Perl versions (such as subroutine signatures and performance improvements) without worrying about what version of Perl is installed on someone else’s machine. You don’t need to worry about core modules missing from Fedora and other platforms (Fedora omits autodie – an essential module for lazy programmers).  So building a packed executable is a great way to distribute Perl apps to people not comfortable with the Perl toolchain.

You can think of it like a Perl compiler (it’s not, but the distinction isn’t terribly important for this context).

It’s a somewhat finicky beast, and can take some effort to get it to package things the way you need them, but it’s a standard trick of the trade when I can’t just use perlbrew to build what I want on a machine. It certainly is a lot quicker to install a single executable than to compile and test hundreds of CPAN modules!

So, it seems natural to combine this with RPM. This way, users can install my stuff just by doing a simple rpm --install <file>.rpm – that’s awesome.  Well, it would be, but building an RPM with a PAR::Packer built executable doesn’t actually work, at least not on Red Hat Enterprise Linux 5 (RHEL5) or RHEL6.

Let me explain….

Take this script, hello.pl:

#!/usr/bin/env perl

use v5.22; # Use perl 5.22's features
use strict;
use warnings;

use autodie; # This causes issues on Fedora out of the box

MAIN: {
    say reverse (split //, "!dlroW olleH");
}

If I run it with most distributions’ stock perl, line 3 will give me an error (the OS perl isn’t new enough). On Red Hat/Fedora installs, even if I changed line 3 to support older interpreters, the use of autodie on line 7 would fail unless that module had already been installed.

For instance:

perldemo:demo$ ./hello.pl
Perl v5.22.0 required--this is only v5.18.2, stopped at hello.pl line 3.
BEGIN failed--compilation aborted at hello.pl line 3.

Of course it works fine when I use my local perl interpreter (a perlbrew-built 5.22.1):

perldemo:demo$ perl hello.pl
Hello World!

So, I can create a packed executable, which runs great (at least on other systems with the same system libraries – the same limitation as a dynamically linked executable built with C):

perldemo:demo$ pp -o hello hello.pl
perldemo:demo$ ./hello
Hello World!

Perfect! Note that I ran the executable hello, not hello.pl. The hello file is a self-contained executable that doesn’t depend on the system perl.

So what happens when I put this in an RPM package?  I won’t go through the steps of building an RPM, but here’s the error you get when you try running the installed executable:

perldemo:demo$ ./hello
Usage: ./hello [ -Alib.par ] [ -Idir ] [ -Mmodule ] [ src.par ] [ program.pl ]
./hello [ -B|-b ] [-Ooutfile] src.par

What the heck? It’s expecting a PAR file to be passed to it – the PAR, or Perl ARchive, contains the perl interpreter, your code, and your modules – and for some reason the executable can’t find its own archive.

You can duplicate this without RPM by using strip on the output of pp on Linux:

perldemo:demo$ pp -o hello hello.pl
perldemo:demo$ ./hello
Hello World!
perldemo:demo$ strip ./hello
perldemo:demo$ ./hello
Usage: ./hello [ -Alib.par ] [ -Idir ] [ -Mmodule ] [ src.par ] [ program.pl ]
./hello [ -B|-b ] [-Ooutfile] src.par

What is going on? strip removes debugging information and the like, and generally isn’t expected to change how your program runs – but in this case it’s removing a lot more. Running the same strip on OSX is enlightening:

perldemo:demo$ strip hello
/Applications/Xcode.app/Contents/Developer/Toolchains/XcodeDefault.xctoolchain/usr/bin/strip: the __LINKEDIT segment does not cover the end of the file (can't be processed) in: /Users/jmaslak/pp/hello
perldemo:demo$ ./hello
Hello World!

So it appears Macs won’t strip the archive out of the self-extracting executable produced by pp – but Linux happily strips it.
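You can also see this in the file size – a pp binary is mostly the appended archive, so stripping it away is dramatic:

perldemo:demo$ ls -l hello     # before strip: several megabytes (loader + PAR archive)
perldemo:demo$ strip ./hello
perldemo:demo$ ls -l hello     # after: a small fraction of that – the archive is gone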

It turns out that standard configurations of rpmbuild on at least RHEL5 and RHEL6 automatically strip all binaries in the binary package (that’s the “standard” package you would install).

So how do you disable this evil? It turns out it’s not obvious. On RHEL6, you add this near the top of your SPEC file (the file rpmbuild uses to create the RPM):

%global __os_install_post %{nil}

Now, rpmbuild won’t strip the file. But this doesn’t help RHEL5. So I added another line that does what I want (assuming I don’t need a debug package, which I don’t in this case):

%define debug_package %{nil}
%global __os_install_post %{nil}
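
For context, here’s roughly where those lines sit in a minimal SPEC file (everything other than those two lines is a hypothetical skeleton – adapt the names and paths to your package):

Name:     hello
Version:  1.0
Release:  1
Summary:  PAR::Packer-built hello executable
License:  Artistic

# Keep rpmbuild's hands off the binary:
%define debug_package %{nil}
%global __os_install_post %{nil}

%description
Self-contained Perl executable built with pp.

%install
mkdir -p %{buildroot}/usr/bin
install -m 0755 %{_sourcedir}/hello %{buildroot}/usr/bin/hello

%files
/usr/bin/hello

Build it with rpmbuild -bb hello.spec as usual.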

Sure enough, now my code does exactly what I want, even when I install the PAR executable via RPM:

perldemo:demo$ ./hello
Hello World!

Whew!

A Fix for Perl SSL on MacOS X 10.11

UPDATE (August 12, 2017) – Check out Tom Vander Aa’s comment below. His solution is better than the one in my original post – it has much less chance of breaking other things that use OpenSSL.

ORIGINAL:

When trying to install some Perl modules on MacOS X, using a perlbrew-built perl, I was getting some weird build errors involving various OpenSSL headers.  Here’s an example from Net::SSLeay:

laptop$ cpan install Net::SSLeay
...
Configuring M/MI/MIKEM/Net-SSLeay-1.72.tar.gz with Makefile.PL
CPAN::Reporter not installed.  No reports will be sent.
*** Found OpenSSL-0.9.8z installed in /usr
*** Be sure to use the same compiler and options to compile your OpenSSL, perl,
    and Net::SSLeay. Mixing and matching compilers is not supported.
...
cc -c   -fno-common -DPERL_DARWIN -fno-strict-aliasing -pipe -fstack-protector-strong -I/usr/local/include -O3   -DVERSION=\"1.72\" -DXS_VERSION=\"1.72\"  "-I/Users/jmaslak/perl5/perlbrew/perls/perl-5.22.1/lib/5.22.1/darwin-2level/CORE"   SSLeay.c
SSLeay.xs:163:10: fatal error: 'openssl/err.h' file not found
#include <openssl/err.h>
         ^
1 error generated.
make: *** [SSLeay.o] Error 1
  MIKEM/Net-SSLeay-1.72.tar.gz
  /usr/bin/make -- NOT OK

The interesting parts: the build found an ancient OpenSSL 0.9.8z in /usr, and then couldn’t find openssl/err.h at all. I run homebrew on my Mac to manage development tools and the like – sure enough, it had installed openssl already:

laptop$ brew install openssl
Warning: openssl-1.0.2e already installed

Hmmm, version 1.0.2e, which doesn’t look like the 0.9.8z that Net::SSLeay found.

A bit of Googling and scratching my head and I found the magic incantation:

laptop$ brew link openssl --force
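
If you want to sanity-check the link before re-running cpan (this assumes the default homebrew prefix of /usr/local), the header the compiler couldn’t find should now be present:

laptop$ ls -l /usr/local/include/openssl/err.h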

Once that was done, I could successfully install Net::SSLeay and other SSL Perl modules.  I’m guessing that the links broke sometime during an OS upgrade. Hopefully this post will save you a bit of time!

Perl is Good for Nothing

I love Perl – and the perl interpreter always impresses me.  Today, I decided to try a few languages to see how they compare.

How well does each language do nothing?  I decided to test this out on my fairly speedy MacBook Pro.  All tests were executed multiple times, and the best result was used for this post, to account for various caching speedups.
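
If you want to reproduce this, a crude harness along these lines (the loop count is arbitrary) does the run-several-times-and-keep-the-best dance:

do-nothing$ for i in 1 2 3 4 5; do /usr/bin/time -p perl nothing.pl 2>&1 | grep ^real; done | sort -n -k 2 | head -1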

First, C:

do-nothing$ touch nothing.c
do-nothing$ time clang nothing.c
Undefined symbols for architecture x86_64:
  "_main", referenced from:
     implicit entry/start for main executable
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)

real	0m0.041s
user	0m0.017s
sys	0m0.017s

So, C can spit out an error that will probably not make sense to people new to the language (it’s missing a main function, although you’ll usually define main without the leading underscore – for reasons I won’t get into here).  But it was fairly quick – about 40ms.

Next I tried Java (using the Sun – er, Oracle – implementation):

do-nothing$ touch nothing.java
do-nothing$ time javac nothing.java

real 0m0.759s
user 0m1.323s
sys 0m0.105s

Java doesn’t throw any errors, but it takes over 750ms to compile nothing (in a somewhat satisfyingly mathematically pure way, it literally produces nothing – no output files are created). When I ran javac with the -verbose option (how a Unix workstation company decided long options with a single hyphen were okay is beyond me, but I digress), it spits out some timing information: 23ms to parse nothing and roughly 290ms to do the compilation. I can only assume the remaining 450ms or so goes to compiler startup overhead.

How about Ruby?

do-nothing$ touch nothing.rb
do-nothing$ time ruby nothing.rb

real 0m0.073s
user 0m0.053s
sys 0m0.010s
do-nothing$

It takes 73ms to do nothing, but it does properly do nothing.

How about Python (v2)?

do-nothing:t$ touch nothing.py
do-nothing:t$ time python nothing.py

real 0m0.018s
user 0m0.009s
sys 0m0.007s

Python does nothing pretty darn well – 18ms!

Now, my language of choice, Perl 5:

do-nothing$ touch nothing.pl
do-nothing$ time perl nothing.pl

real 0m0.006s
user 0m0.002s
sys 0m0.003s
do-nothing$

Brilliant – it does nothing very quickly compared to other languages – 6ms.

That said, my old (Christmas) release of the Rakudo-based Perl 6 takes roughly 250ms – not all that good. I’m not sure how the newer versions do. It’s certainly a powerful new language (you should think of Perl 5 and Perl 6 as distinct languages – both are actively developed, with new features, optimizations, and bug fixes added continually, and no plans to discontinue either).

So, I think, in conclusion:

  • C isn’t good for nothing
  • Java can’t do nothing quickly
  • Perl 6 can do nothing, but not too quickly
  • Ruby seems okay for nothing, while Python 2 is pretty darn good at nothing
  • Perl 5 is good for nothing!

(Yes, this post is 90% jest – startup time of the tools is important, but is almost always a dumb reason to pick a language)

Securing Against OpenSSH’s Less-Than-Perfect Defaults

There’s a general security principle: Lock everything up, and then unlock only what needs to be unlocked.

However, this is in contrast to OpenSSH‘s defaults for cryptography, which are, “Open most things up so that old clients can connect.”

[Photo by the author: a combination lock]

For those who don’t know, OpenSSH is the most widely used SSH (Secure Shell) server – a tool used by sysadmins (and others) to access remote computers. It’s also the security foundation of most Git (a version control system for software developers) deployments. It secures some of the most sensitive parts of our systems: our intellectual property and “root” access.

Now, none of OpenSSH’s choices are truly horrible, but the defaults certainly aren’t the strongest they could be, either. OpenSSH defaults to allowing several less secure algorithms alongside some very strong ones. It does this for backwards compatibility.

However, backwards compatibility is not relevant to SSH in many environments.  Do you really need to support older, somewhat less secure, cryptographic algorithms because some admin might be using an ancient version of PuTTY?  Perhaps it’s time to get that person to upgrade!

So, here are some quick notes – you can find more detail elsewhere – on OpenSSH. You might need to loosen some of these settings in your environment, but at least you’ll start fairly secure. I’ll put up another post on Apache at a later date.

For SSH:

Note that you may have to alter paths or other items in the snippets below. Also note that I assume you have the reasonable defaults of recent OpenSSH versions – i.e., no protocol version 1 support, no rsh fallback, etc.
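
It also helps to know what your particular build supports (as opposed to what it enables by default) – recent OpenSSH versions will tell you:

ssh -Q cipher    # ciphers this build supports
ssh -Q mac       # MACs
ssh -Q kex       # key exchange algorithms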

In your sshd_config file (typically in /etc/ssh/sshd_config):

HostKey /etc/ssh/ssh_host_ed25519_key
HostKey /etc/ssh/ssh_host_rsa_key
# HostKey /etc/ssh/ssh_host_dsa_key
# HostKey /etc/ssh/ssh_host_ecdsa_key
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

What does this do? First, we comment out the DSA and ECDSA keys. DSA is on the cusp of being insecure (its maximum key length is 1024 bits), while ECDSA uses NIST-supplied curves, which some believe have been backdoored by the NSA.

Next, I enabled only two key exchange algorithms. Notably, I didn’t enable the SHA-1 versions of the DH algorithms, or the ECDH algorithms with NIST curves (the same backdoor concern as above). Why use a potentially less secure algorithm when stronger ones are available?

For encryption, I picked a subset of ciphers I believe to be secure and reordered them to suit my needs (but, again, your needs may differ). The big deal is what’s missing: I didn’t want 3DES, CAST128, or RC4 (arcfour) in the list – these all have problems and shouldn’t be used unless you really need them.

For MACs (these protect message integrity – verifying that nobody is tampering with the ciphertext in transit), I removed all MD5 MACs and any MAC shorter than 128 bits. MD5 is known to have issues, so why use it? And longer MACs are good in this context.

I also like to set PasswordAuthentication no, assuming I’m in an environment where I can force the use of SSH keys or other non-password authentication. Passwords can be guessed, and you can bet you’ll see thousands of attempts to guess them if your firewall allows people on the internet to try. Of course, you need to set up key authentication in advance.
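
If you can do that, these are the sshd_config lines (turn off challenge-response too, or password guessing can sneak back in through that door):

PasswordAuthentication no
ChallengeResponseAuthentication no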

Finally, there are two other things that need to be done – creating a stronger RSA host key and generating non-standard Diffie-Hellman parameters (the latter protects your system against Logjam).

The standard RSA key, at least on my Ubuntu system, was a 2048 bit key. That’s not a bad length, but I wanted something more secure. To do this:

ssh-keygen -b 4096 -t rsa -f /etc/ssh/ssh_host_rsa_key

This generates a 4096 bit RSA key. Note that users who had the old key stored in their .ssh/known_hosts file will have to remove the old key and add the new one (and they are validating the new fingerprint, right? As an alternative, you might store the fingerprints in a DNSSEC-secured DNS zone).
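
Removing the stale entry is easy with ssh-keygen’s -R option (the hostname here is a placeholder):

ssh-keygen -R server.example.com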

To generate the non-standard DH parameters (note: this takes several hours on my machine):

ssh-keygen -b 2048 -G moduli.candidates -M 127
ssh-keygen -T moduli.2048 -f moduli.candidates
ssh-keygen -b 4096 -G moduli.candidates -M 127
ssh-keygen -T moduli.4096 -f moduli.candidates
cat moduli.2048 moduli.4096 >/etc/ssh/moduli

Of course you should read and understand what these do – not just take my word for it!

You’ll need to restart sshd after doing this (on Ubuntu, service ssh restart). Make sure you have a way of recovering access if you lock yourself out over SSH!
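
Before that restart, it’s worth letting sshd check the new config for errors – and keep an existing session open until you’ve confirmed a fresh login works:

/usr/sbin/sshd -t && service ssh restart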

Finally, the changes you made to your sshd_config, particularly the algorithm choices, can be added to your ssh_config so your clients don’t try to use less secure algorithms. In the “Host *” section, add:

    KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
    Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr
    MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,hmac-ripemd160-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,hmac-ripemd160,umac-128@openssh.com

Of course these changes may prevent older clients from connecting to your servers, or your client from connecting to older servers. If that’s the case, you need to rework the defaults (for the new-client/old-server problem, you can use a host-specific override in your ssh_config).

Lastly, generate a longer user key pair if you use SSH key-based authentication (do this as the user you are generating the key for):

ssh-keygen -b 4096

The default key length is 2048 bits, which is good, but 4096 is a bit better.
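
Alternatively, since the host-key configuration above prefers Ed25519, an Ed25519 user key is a reasonable choice too (its key size isn’t tunable, so there’s no -b to worry about):

ssh-keygen -t ed25519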

Will this keep the NSA out? Probably not, but at least you’re using better-than-default practices now.

Debian/Ubuntu, systemd, NTP, and something called timesyncd

Unix has, for years, had a program called ntpd that uses NTP (the Network Time Protocol) to set the time.  The ntpd service is a pretty advanced thing – it can do the basic “set your workstation’s time” tasks, but it can also talk to atomic clocks, provide time service to other machines via multicast or broadcast, and do some fairly sophisticated network time synchronization that keeps one or two bad server clocks from impacting your local time. It also supports authentication, which is a hard requirement in some environments.  For instance, PCI-DSS, the standard for processing credit cards, says in section 10.4.2 that “time data must be protected” – that doesn’t explicitly require authentication, but authentication is clearly a good way to help meet it.
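
For reference, a minimal ntp.conf looks something like this (the pool hostnames are the usual public servers; the commented lines are where authentication hooks in, if your environment needs it):

driftfile /var/lib/ntp/ntp.drift
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
# keys /etc/ntp/keys
# trustedkey 1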

I love ntpd.

The systemd people, on the other hand, apparently hate it.  They went the same direction as some other popular mass-market operating systems and decided NTP is too complex to implement, so they implemented only SNTP (Simple NTP), and only in client mode. It doesn’t function as a server. It doesn’t do authentication. It doesn’t track jitter and delay over time. It doesn’t try to make time jumps only in a forward direction.  It doesn’t do any number of other things that keep your time accurate.

Sure, it was easier for the systemd people’s world view: when a new network interface comes up, the service tries to fetch the proper time based on that interface’s configuration. That’s cool – but the same thing can be done with ntpd fairly easily. And there is a place for SNTP – embedded systems with limited resources – just not on computers with enough processing power to run, say, Unity (Ubuntu’s default GUI).

So here’s how to replace it with the real ntpd:

First, remove the systemd-timesyncd.service unit file:

rm /etc/systemd/system/systemd-timesyncd.service

Next, create /lib/systemd/system/ntp.service with the following contents:

[Unit]
Description=NTP
After=network.target auditd.service

[Service]
EnvironmentFile=-/etc/default/ntp
ExecStart=/usr/sbin/ntpd -n $NTPD_OPTS
ExecReload=/bin/kill -HUP $MAINPID
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target
Alias=ntpd.service

Then link this to /etc/systemd/system/ntp.service:

ln -s /lib/systemd/system/ntp.service /etc/systemd/system/ntp.service

Then restart systemd:

systemctl daemon-reload

Now you can start NTP normally:

systemctl start ntp
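
You’ll probably also want it to start at boot (that’s what the [Install] section above is for), and it’s worth confirming it’s actually reaching its servers:

systemctl enable ntp
ntpq -p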

Now you have workable NTP!

How Much Power Does a Raspberry Pi Draw?

There’s been an interesting, albeit somewhat off-topic, discussion on the NANOG mailing list about a theoretical project consisting of thousands of Raspberry Pis networked together, presumably doing some sort of clustered computing task.  I’m not sure this is actually efficient (a high-end video card is probably a better use of money, power, and time) – unless, of course, someone is doing it merely for the joy of doing it (in which case, I want to see pictures when it’s done).

One of the obvious things you need to do is to power and cool such a beast – while one Pi puts out negligible heat, hundreds or thousands start to put out real heat.

So how much does a Pi draw?  I didn’t pull out my newer Pis, but I hooked an older one to a lab power supply, as shown:

[Photo: setup of the Pi for power usage monitoring]

I apologize that the board is a bit out of focus, but essentially I powered the Pi via expansion header pin 2 (5V) and the shield of the USB port (0V). It wasn’t hooked up to a USB supply.  I use a USB stick for my root file system (the SD card is just for boot) because it performs significantly better than an SD card – I’ll write an article sometime on how to do that. That USB stick has some power draw of its own, so a typical Raspberry Pi Model B will probably draw slightly less without external storage. That said, I think I’m in the ballpark.

I read idle power (sitting at a Linux prompt) and power at load (CPU in a busy loop, along with a “ping -f” from another host on the LAN towards the Pi, to exercise the CPU, the USB subsystem, and the network stack). Power usage was roughly 2.0 watts at idle and 2.3 watts at load.  I imagine it would be a bit higher if I also exercised the GPU.
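
The load generation was along these lines (the busy loop here is a stand-in for what I actually ran, and 192.168.1.10 is a hypothetical address for the Pi):

pi$ yes > /dev/null &                    # one core in a busy loop
otherhost$ sudo ping -f 192.168.1.10     # flood ping from another LAN host (needs root)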

1,000 of these would draw roughly 2,300 watts at load – at about 3.4 BTU/hr per watt, that converts to nearly 8,000 BTU/hr, which is a healthy sized space heater (in the USA, a space heater that plugs into a standard outlet is about 1,800 watts) you are putting in your datacenter.  Put 10,000 into your datacenter, and you’re talking 13 space heaters!

So, it’s not a trivial amount of power at scale – nor is it trivial when you have to run them on battery.  I run one of these in a motorcycle trailer (don’t ask, I’ll explain some other time!) off of batteries, and it can drain the small trailer battery (designed primarily for emergency breakaway brake activation) fairly quickly. The trailer battery is a 7Ah 12V lead-acid battery – assuming a nominal voltage of 12.5V and a perfectly efficient 12V-to-5V converter, we’re in the neighborhood of 200mA of draw from the battery, or 35 hours of runtime. In reality, it’s hard to draw that last bit out of the battery, and while my converter is efficient, it’s not perfect, so I assume more like 24 hours. Of course, the Pi in the trailer is running some other things (namely some wifi interfaces), so actual runtime is significantly less than that – but as you can see, you start to need real batteries to run this thing. It’s not the low-power (at least at idle) electronics that sit in our phones.
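
Spelling out that back-of-the-envelope math (idealized, as noted):

2.3 W / 12.5 V ≈ 0.184 A    (call it 200 mA from the battery)
7 Ah / 0.2 A = 35 hours     (perfect converter, fully usable battery)
derate for converter losses and unusable capacity: closer to 24 hours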

Regardless, whether you are running a rack full of Raspberry Pi computers or just one off of a battery, heat and power are real concerns anytime you do something at scale or away from mains power.