Getting Hyperfido U2F keys working on Linux (Ubuntu or Mint)

I recently got a pair of cheap HyperFIDO keys. Some of them go for as little as €5.50 on Amazon, though you can get others that may be better built. There are also other keys with support for Bluetooth, NFC and other features that can be very useful.

Anyway. These keys don’t work out of the box on Linux because the kernel doesn’t recognize them as the proper hardware type, but the solution is very easy to implement.
Right now I’m running three Linux boxes with Linux Mint 18 and 19.

The solution, as I wrote, is really simple. Just go to /etc/udev/rules.d (as root, of course) and create a file named 70-u2f.rules.

Fill it with the following data:

# this udev file should be used with udev 188 and newer
ACTION!="add|change", GOTO="u2f_end"


# HyperSecu HyperFIDO
KERNEL=="hidraw*", SUBSYSTEM=="hidraw", ATTRS{idVendor}=="096e|2ccf", ATTRS{idProduct}=="0880", TAG+="uaccess"

LABEL="u2f_end"

Some places write that only the 096e idVendor is needed. In my case the vendor ID is 2ccf (you can check yours with the lsusb command).
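
For reference, this is roughly what the relevant lsusb line looks like for my keys; the bus and device numbers (and the trailing description) will be different on your machine, and other devices are omitted here. What matters is that the ID pair matches the rule above:

$ lsusb
...
Bus 001 Device 007: ID 2ccf:0880 ...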

After creating the file, you can either reboot your computer or just reload the udev rules with sudo udevadm control --reload-rules.
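
If the key is already plugged in, you can also ask udev to re-run the rules against the existing devices instead of unplugging and replugging it:

sudo udevadm trigger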

Some references:

https://gist.github.com/quantenschaum/1426c93fe10fb8e5ed040100d6adfc7b
https://github.com/Yubico/libu2f-host/blob/master/70-u2f.rules

Rename multiple files with a given pattern

You may need to rename several files in a folder following a pattern.
Say you have

jmiguel@monk /tmp/a  $ touch file.1.txt
jmiguel@monk /tmp/a $ touch file.2.txt
jmiguel@monk /tmp/a $ touch file.3.txt
jmiguel@monk /tmp/a $ ll
total 8
drwxr-xr-x 2 jmiguel jmiguel 4096 jun 19 10:54 .
drwxrwxrwt 28 root root 4096 jun 19 10:54 ..
-rw-r--r-- 1 jmiguel jmiguel 0 jun 19 10:54 file.1.txt
-rw-r--r-- 1 jmiguel jmiguel 0 jun 19 10:54 file.2.txt
-rw-r--r-- 1 jmiguel jmiguel 0 jun 19 10:54 file.3.txt

And you want to get

-rw-r--r--  1 jmiguel jmiguel    0 jun 19 10:54 newname.1.txt
-rw-r--r-- 1 jmiguel jmiguel 0 jun 19 10:54 newname.2.txt
-rw-r--r-- 1 jmiguel jmiguel 0 jun 19 10:54 newname.3.txt

You could write a loop with string substitution… a mess. Or you can use the rename command this way:

rename 's/file/newname/g' *
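
If you want to see what would change before touching anything, the Perl rename has a -n (no action) flag that only prints the renames it would perform:

rename -n 's/file/newname/g' *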

Note that rename is a small Perl script, so it may not be available on your operating system (Mint and Ubuntu ship it), but if you have Perl (and the File::Rename module from CPAN) you can just copy and paste the following code to create your own rename version.

#!/usr/bin/perl
# Minimal "rename" front-end around the File::Rename CPAN module.

use strict;
use File::Rename ();
use Pod::Usage;

main() unless caller;

sub main {
    # Parse command-line options (-n, -v, -f, the Perl expression, ...)
    my $options = File::Rename::Options::GetOptions
        or pod2usage;

    mod_version() if $options->{show_version};
    pod2usage( -verbose => 2 ) if $options->{show_manual};
    pod2usage( -exitval => 1 ) if $options->{show_help};

    # On Windows the shell does not expand wildcards, so do it here
    @ARGV = map {glob} @ARGV if $^O =~ m{Win}msx;

    File::Rename::rename(\@ARGV, $options);
}

sub mod_version {
    print __FILE__ .
        ' using File::Rename version ' .
        $File::Rename::VERSION . "\n\n";
    exit 0;
}

1;

Copy thousands of files over ssh

So… you have to copy several thousand files and folders from one server to another. You’re short of disk space, as usual, and you want to be as efficient as possible.

You can use scp -r files user@server:/... but if you do it this way you’ll notice it takes forever. This is because ssh/scp opens and closes a stream for every single file. So you try to zip/tar on the source side to copy just one file, and then uncompress on the target side after the copy. But this way you need roughly twice the space on both sides. So, what can you do?

ssh streams to the rescue

Maybe you know you can execute commands over ssh. If you do ssh user@host ls, what you get is the output of ls on the remote side. You also know there is an underlying stream which ssh uses, for example, in ssh tunnels.

So, how can you solve the original problem with this tool? Easy: create a tar file (maybe even compressed) on the fly, pipe it through ssh and un-tar it on the other side. It’s MAGIC! :-)

Copy from our local host to a remote one:

tar -czf - [files] | ssh user@remoteHost "tar -xzvf - -C /remote/desired/folder"

Read it as: create (c) a tar file as usual, compressed (z), writing to standard output (-). Pipe it through ssh to remoteHost and, there, run a tar extract (x), also with compression (z), verbose so you can see what’s going on (v), reading from standard input (-), which comes from the ssh stream. Change (-C) to the desired remote folder before starting the extraction on the remote side.

This is, AFAIK, the best way to do the job. Just one step, super fast, using compression if needed… what else?
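
As a small optional extra (not part of the original recipe), if you have pv installed on the local machine you can drop it in the middle of the pipe to see throughput and total bytes transferred:

tar -czf - [files] | pv | ssh user@remoteHost "tar -xzf - -C /remote/desired/folder"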

Of course, you can also do it in reverse, from the server to your host:

ssh user@remoteHost "tar -cvf - /your/remote/folder" | tar -xvf -

You can even create a local tar file from remote files this way:

ssh user@remoteHost "tar -czvf - /your/remote/folder" > local.file.tgz

Cool, isn’t it?

Setting default Linux kernel

Sometimes we want to boot Linux with an older kernel than the default. If it’s just once, it’s a piece of cake to select it in the boot menu.

Things start to get a bit worse if you need to boot with a previous version every time because (as is my case right now) you suspect there’s something that could affect performance or stability, but you’re not sure.

The option best suited to my needs is telling GRUB 2 to remember the last kernel used. To do this you have to follow these steps:

  • Keep a copy of the grub file. Just in case…
sudo cp /etc/default/grub /etc/default/grub.bak
  • Edit it with your favourite editor: vi, emacs, nano, joe …
sudo joe /etc/default/grub

and change the default behaviour GRUB_DEFAULT=0, which causes GRUB to boot the first entry (the latest kernel), to:

GRUB_DEFAULT=saved
GRUB_SAVEDEFAULT=true
  • Update grub with sudo update-grub

Done!
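
If you later want to check which entry GRUB has actually saved as the default, you can read its environment block (look for the saved_entry line):

sudo grub-editenv list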

Changing hostname on Ubuntu 18.04 Server

Coming from previous versions of Ubuntu, I’m running into some issues with the new configuration system on Ubuntu 18.04. I’m not sure I welcome all of these changes but… they are there, so we have no option.

A few moments ago I installed a new server with Ubuntu 18 and, thinking of the future, I created a blueprint on VMware that I could reuse for new servers. So I created a new server with basic functionality, upgraded it and then converted it to a blueprint. Obviously, I gave it a nonsense name such as ubuntu18.

After that, I created a virtual machine from the blueprint and, you guessed it, I tried to change the name.

On previous versions, I just needed to change it in /etc/hosts and /etc/hostname. That was all. But now we need an additional step: you have to edit /etc/cloud/cloud.cfg and tell cloud-init you don’t want it to reset the hostname (I couldn’t find where it takes the original name from) on every reboot. The parameter is preserve_hostname and you need to change the default false value to true.

So, all the steps:

  • Edit /etc/cloud/cloud.cfg and change preserve_hostname: false to preserve_hostname: true
  • Edit /etc/hostname and /etc/hosts. In the hostname file, just put your new name. In /etc/hosts, add lines like 127.0.1.1 myHostName and x.x.x.x myHostName, where x.x.x.x is your real IP address (if static)
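
By the way, on 18.04 you can let systemd write /etc/hostname for you instead of editing it by hand (the cloud.cfg change above is still needed):

sudo hostnamectl set-hostname myHostName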

That’s all!

Setting clock on ESXi console

Today I realized one of my ESXi servers had its system clock way off into the future. It has no internet access, so I cannot simply sync it against any of the multiple time servers around the world. I thought it was just a matter of doing a date -s …, but on the ESXi console things are usually not that simple.

The right way to change the time from the console involves two commands: one for the current date and another for setting the hardware clock, so you still get the right date after a reboot. The commands are:

# esxcli system time set -?
Usage: esxcli system time set [cmd options]

Description:
  set                   Set the system clock time. Any missing parameters will default to the current time

Cmd options:
  -d|--day=<long>       Day
  -H|--hour=<long>      Hour
  -m|--min=<long>       Minute
  -M|--month=<long>     Month
  -s|--sec=<long>       Second
  -y|--year=<long>      Year

and

esxcli hardware clock set -?

with the same format as above.

Note the “Any missing parameters will default to the current time”: I assumed the seconds would go to zero when setting only the minutes, but that’s not the case here; they default to the current value.

So, if you want to set the time to 09:10:00, keeping the current date, the right commands are:

# esxcli system time set -H 09 -m 10 -s 00 ; esxcli hardware clock set -H 09 -m 10 -s 00
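
Afterwards you can read both clocks back to check they agree:

esxcli system time get
esxcli hardware clock get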

The fastest way to sum a column in the shell: AWK

Say you have (as I do) a file /tmp/myFile.txt containing a list of unrelated folders with their size, as follows:

7220    X03066659D/txt/20150109
1365 A30266659D/txt/20150112
9 X30626659D/txt/20150121
0 X30663659D/xml/20150102
5292 A30646659D/xml/20150105
10872 X30Q66659D/xml/20150107
7384 A30A66659D/xml/20150108

And you want to sum the first column of every line to get the total. AWK to the rescue!

awk '{sum += $1} END {print sum}' /tmp/myFile.txt

If there is a condition you’d like to apply (imagine you only need the rows whose second column starts with A), you can filter in the AWK program itself (no need for grep):

awk '$2 ~ /^A/ {sum += $1} END {print sum}' /tmp/myFile.txt
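
And since AWK processes the file in a single pass, extra figures come almost for free. For example, sum, row count and average at once (the if just avoids a division by zero on an empty input):

awk '{sum += $1; n++} END {if (n) print sum, n, sum/n}' /tmp/myFile.txt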

Restore whole folders from Amazon S3 Glacier

Amazon Web Services is an incredible tool. And S3 is one of the best ways of storing your data and, of course, backups of your data.

If you use it for backups, you surely know the Glacier feature: you can freeze your data for a lower price… in money. Because the real price you pay is that your data isn’t available immediately when you need it; you have to wait for it to be restored before you can get it.

But a maybe worse issue is that you can’t restore a whole folder. You can only restore files. So if you want to un-Glacier a folder you have to traverse the whole folder tree and check every individual file you want. One by one.

Although this sounds weird (and it is!), it’s related to the way AWS S3 sees your files: there are no folders. A folder is just a visual representation of the file structure (using / as a separator). For S3, everything is a file.

That said: I needed to restore a full directory with a lot of subdirectories, so doing the job one by one wasn’t an option. So I came to this solution on Stack Overflow, with credits to this other blog post.

I’ve modified the script just a bit because it didn’t work for me (maybe some parameters of the aws-cli have changed). You’ll need the AWS CLI (Command Line Interface) to do this.

First of all, we get all the Glaciered items in a bucket, named here MYBUCKET; substitute it with your own name
(please note: I’ve updated the command to allow spaces in file or folder names):

aws s3api list-objects-v2 --bucket MYBUCKET --query "Contents[?StorageClass=='GLACIER']" --output text | awk '{print substr($0, index($0, $2))}' | awk '{NF-=3};3' > glacier-restore.txt

This gets a list of all the objects under Glacier storage in your bucket. Maybe you don’t want all of them, so this is the moment to edit the glacier-restore.txt file in case you want to fine-tune which files to get.

Then, create (edit + chmod 755) the following script, which takes the contents of that file and does the restore:

#!/bin/sh

# Read the object keys, one per line, and ask S3 to restore each of them.
# IFS= and -r keep leading spaces and backslashes in the keys intact.
while IFS= read -r x
do
    echo "Begin restoring $x"
    # Keep the restored copy available for 25 days
    aws s3api restore-object --restore-request Days=25 --bucket MYBUCKET --key "$x"
    echo "Done restoring $x"
done < glacier-restore.txt

Remember: the process is not immediate. With this command you instruct AWS to restore your files, but it will take between 3 and 5 hours to complete. You can make it faster (and more expensive), but if you need it that fast you should maybe reconsider standard storage.
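
If you want to check how a particular object is doing, you can query it with head-object (the key below is just a placeholder; use one from glacier-restore.txt). While the restore is in progress the response shows a Restore field with ongoing-request="true"; once it’s done it switches to "false" and includes an expiry date:

aws s3api head-object --bucket MYBUCKET --key "your/folder/your file.txt"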

Hope this helps!