Archive | Tech

VirtualBox Guest Additions | SuSE Install

10 Jul

VirtualBox (now Sun xVM) Guest Additions are a set of drivers and utilities shipped with VirtualBox, intended to be installed inside a guest OS to improve its performance and its integration with the rest of the product.

If you are running openSUSE as a guest OS and want to install the VirtualBox Guest Additions then follow the procedure below:
Install GNU C Compiler, Make and Kernel Source
The VirtualBox Guest Additions require the GNU C compiler, the make utility and the kernel-source package, so install them if they are not already present.

Switch user to root and install the packages:

user@opensuse:~> su -

password:

opensuse:~# yast2 --install gcc gcc-c++ make kernel-source

This installs the GNU C and C++ compilers, the kernel-source package and the make utility.

Now, from the host OS, open the guest's VirtualBox Devices menu and click “Install Guest Additions…”. This mounts a virtual CD volume on the openSUSE guest OS under

/media/cdrom/VBOXADDITIONS_<version>

in this case:

/media/cdrom/VBOXADDITIONS_1.6.2_31466

Change directory to that mount point and run the install script:

opensuse:~# cd /media/cdrom/VBOXADDITIONS_1.6.2_31466/

opensuse:~# ./VBoxLinuxAdditions.run all

This should install the VirtualBox Guest Additions. Now restart the openSUSE guest OS for the additions to take effect. The Guest Additions improve guest performance and the user experience, including display settings.

nmap scan via TOR | hidemyip

9 Mar
Description

This tutorial shows how to configure the tools needed to run an Nmap port scan through the Tor network. The technique can be used as part of a pentest, but it can also be used by attackers. Be careful which nmap scan options you use, as some of them send packets directly and disclose your IP address. You can add an iptables entry to drop all outbound traffic to the destination for a particular scan; see further on for a how-to.

Pre-requisites

First ensure you have the necessary tools installed; the packages used below are tor, proxychains and tortunnel.

Configuration

In the following example, we run an Nmap port scan through tortunnel via proxychains. We use tortunnel because it makes the scan faster: by default, Tor builds circuits of at least 3 hops, whereas tortunnel connects directly to a final exit node, which speeds up the scan considerably.

First install tor and configure it and install proxychains:

$ sudo apt-get install tor tor-geoipdb proxychains
$ sudo service tor status
tor is running
$ sudo vi /etc/tor/torrc
# add the line below to allow the local ip range to use the tor proxy
# SocksPolicy accept 10.1.1.0/24

Also install tortunnel:

$ sudo apt-get install libboost-system1.40-dev libssl-dev
$ cd /data/src/
$ wget http://www.thoughtcrime.org/software/tortunnel/tortunnel-0.2.tar.gz
$ tar xvzf tortunnel-0.2.tar.gz
$ cd tortunnel-0.2/
$ ./configure
$ make
$ sudo make install

Then configure proxychains to work with tortunnel. Edit the configuration file:

$ sudo vim /etc/proxychains.conf

And modify it as follows:

[ProxyList]
# add proxy here ...
# meanwile
# defaults set to "tor"
#socks4         127.0.0.1 9050
socks5 127.0.0.1 9050
Find an exit node and start torproxy

We then have to find an exit node that is stable, fast and valid; most Tor exit nodes do not support nmap scanning. You could use this:

$ curl http://128.31.0.34:9031/tor/status/all | grep --before-context=1 'Exit Fast Running V2Dir Valid' | awk '{ print $7 }' | sed '/^$/d'

to return a list of exit nodes that support nmap scanning.
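To make the pipeline's stages concrete, here it is run against a canned pair of consensus entries. The sample data and the router-line layout (`r nickname identity digest date time IP ORPort DirPort`, so field 7 is the IP) are assumptions based on the v2 directory format, not output from the real directory server:

```shell
# Canned sample of two consensus entries (format assumed, illustrative only):
sample='r exitnode AAAAAAA BBBBBBB 2011-02-09 12:00:00 178.73.1.1 9001 9030
s Exit Fast Running V2Dir Valid
r middlenode CCCCCCC DDDDDDD 2011-02-09 12:00:00 10.0.0.1 9001 0
s Fast Running Valid'

# Same pipeline as above, minus the curl: keep only routers whose flag
# line carries the full "Exit Fast Running V2Dir Valid" set, print the
# IP (field 7 of the "r" line), and drop the empty line produced by the
# "s" line, which has no field 7.
nodes=$(printf '%s\n' "$sample" \
  | grep --before-context=1 'Exit Fast Running V2Dir Valid' \
  | awk '{ print $7 }' \
  | sed '/^$/d')
echo "$nodes"   # -> 178.73.1.1
```

Only the node flagged as a valid exit survives the filter; the middle node is discarded.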

Then start torproxy, passing the found exit node with the -n switch and binding to local port 9900 with -p:

$ torproxy -n 178.73.***.** -p 9900
torproxy 0.3 by Moxie Marlinspike.
Retrieving directory listing...
Connecting to exit node: 178.73.*.*:9001
SSL Connection to node complete.  Setting up circuit.
Connected to Exit Node.  SOCKS proxy ready on 9900.
Start scan
Warning -> Beware of the parameters you use for the scan since some of them will disclose your IP address. More information below.

For our scan, we use Nmap with following arguments:

  • -Pn: skip host discovery (the ICMP probes it sends would otherwise disclose our IP address).
  • -sT: full connect() scan, to ensure that all packets go through the Tor network.

To ensure that our IP address won’t be disclosed to the target, add the following rule to your firewall:

$ sudo iptables -A OUTPUT --dest <target> -j DROP

Now, run Nmap as follows:

$ proxychains nmap -Pn -sT -p 80,443,21,22,23 80.14.163.161
ProxyChains-3.1 (http://proxychains.sf.net)

Starting Nmap 5.36TEST4 ( http://nmap.org ) at 2011-02-09 22:40 CET
|S-chain|-<>-127.0.0.1:5060-<><>-80.14.163.161:23-<--timeout
|S-chain|-<>-127.0.0.1:5060-<><>-80.14.163.161:22-<--timeout
|S-chain|-<>-127.0.0.1:5060-<><>-80.14.163.161:443-<--timeout
|S-chain|-<>-127.0.0.1:5060-<><>-80.14.163.161:80-<><>-OK
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
RTTVAR has grown to over 2.3 seconds, decreasing to 2.0
|S-chain|-<>-127.0.0.1:5060-<><>-80.14.163.161:21-<--timeout
Nmap scan report for LMontsouris-156-25-20-161.w80-14.abo.wanadoo.fr (80.14.163.161)
Host is up (13s latency).
PORT    STATE  SERVICE
21/tcp  closed ftp
22/tcp  closed ssh
23/tcp  closed telnet
80/tcp  open   http
443/tcp closed https

Nmap done: 1 IP address (1 host up) scanned in 60.86 seconds
Nmap results and tcpdump traces

Nmap results – without tor

$ nmap -Pn -sT 74.50.**.***

Starting Nmap 5.36TEST4 ( http://nmap.org ) at 2011-02-11 05:21 CET
Nmap scan report for 74.50.**.***
Host is up (0.16s latency).
Not shown: 992 closed ports
PORT      STATE    SERVICE
22/tcp    open     ssh
80/tcp    open     http
135/tcp   filtered msrpc
139/tcp   filtered netbios-ssn
443/tcp   open     https
445/tcp   filtered microsoft-ds
10000/tcp open     snet-sensor-mgmt
20000/tcp open     dnp

Nmap done: 1 IP address (1 host up) scanned in 23.38 seconds

tcpdump traces – without tor
Our IP address is disclosed, as shown on the following extract:

$ tcpdump -nS -c 10 -r scan-without-tor.cap "host 80.14.163.161"
reading from file scan-without-tor.cap, link-type EN10MB (Ethernet)
05:21:58.052164 IP 80.14.163.161.51027 > 74.50.**.***.21: Flags [S], seq 3307142116, win 5840, options [mss 1416,sackOK,TS val 148568 ecr 0,nop,wscale 6], length 0
05:21:58.052249 IP 74.50.**.***.21 > 80.14.163.161.51027: Flags [R.], seq 0, ack 3307142117, win 0, length 0
05:21:58.053041 IP 80.14.163.161.46436 > 74.50.**.***.3389: Flags [S], seq 3300984040, win 5840, options [mss 1416,sackOK,TS val 148568 ecr 0,nop,wscale 6], length 0
05:21:58.053058 IP 74.50.**.***.3389 > 80.14.163.161.46436: Flags [R.], seq 0, ack 3300984041, win 0, length 0
05:21:58.054538 IP 80.14.163.161.46034 > 74.50.**.***.80: Flags [S], seq 3299162143, win 5840, options [mss 1416,sackOK,TS val 148568 ecr 0,nop,wscale 6], length 0
05:21:58.054567 IP 74.50.**.***.80 > 80.14.163.161.46034: Flags [S.], seq 2576119236, ack 3299162144, win 5792, options [mss 1460,sackOK,TS val 2639903416 ecr 148568,nop,wscale 5], length 0
05:21:58.055538 IP 80.14.163.161.60357 > 74.50.**.***.8080: Flags [S], seq 3303516262, win 5840, options [mss 1416,sackOK,TS val 148568 ecr 0,nop,wscale 6], length 0
05:21:58.055552 IP 74.50.**.***.8080 > 80.14.163.161.60357: Flags [R.], seq 0, ack 3303516263, win 0, length 0
05:21:58.057287 IP 80.14.163.161.43407 > 74.50.**.***.22: Flags [S], seq 3301543264, win 5840, options [mss 1416,sackOK,TS val 148568 ecr 0,nop,wscale 6], length 0
05:21:58.057303 IP 74.50.**.***.22 > 80.14.163.161.43407: Flags [S.], seq 2572644408, ack 3301543265, win 5792, options [mss 1460,sackOK,TS val 2639903416 ecr 148568,nop,wscale 5], length 0

Nmap results – with tor

$ proxychains nmap -Pn -sT 74.50.**.***
(...TRUNCATED...)
Nmap scan report for 74.50.**.***
Host is up (0.35s latency).
Not shown: 995 closed ports
PORT      STATE SERVICE
22/tcp    open  ssh
80/tcp    open  http
443/tcp   open  https
10000/tcp open  snet-sensor-mgmt
20000/tcp open  dnp

Nmap done: 1 IP address (1 host up) scanned in 420.35 seconds

tcpdump traces – with tor
Our IP address is not disclosed, as shown on the following extract:

$ tcpdump -nS -c 10 -r scan-with-tor.cap "host 80.14.163.161"
reading from file scan-with-tor.cap, link-type EN10MB (Ethernet)
Conclusions

The scans show that Tor lets us run an Nmap port scan without disclosing our IP address. Nevertheless, there are some limitations:

  • The scan must use the full connect() handshake.
  • It is much slower than a normal scan (420 seconds with Tor against 23 seconds without), even though we only used one exit node.
  • The anonymity of the second scan remains relative: since we use only one node, that node could disclose our identity.

Shoutout to http://www.aldeid.com/ for this post

The Command Line Challenge

7 Mar

When I started using Linux I avoided the command line as much as possible. Then I started realizing that the command line is in fact very useful, started digging into what you can actually do with it, and I never stopped learning ever since.

But I had a problem. I found it difficult to learn commands when there are GUI applications to replace them. It’s hard to get into the environment and become proficient if you only do some tasks on the command line. Back then you had to use it for some things, but distros like Ubuntu have the unofficial goal of keeping the user away from the command line. I knew that if I really wanted to master the art of the command line I would have to make it my only environment. So I created the Command Line Challenge.

The idea is simple: Use only the command line for a period of time. If you think of this like a game, the levels would be:

  • Easy: 1 day.
  • Medium: 1 week.
  • Hard: 1 month.
  • Ultimate Geek: 6 months.

I started with the easy level, only to realize it’s possible to keep it up for at least a week. In order to have a working command line environment for everyday use, you may have to install the following software.

Browsers

I used both lynx and elinks. lynx has more options and is more powerful in general, but elinks renders pages better and looks nicer. elinks is not able to log into Facebook (a feature rather than a bug, maybe?)

Text Editing.

Vim. That’s pretty much everything you need. Actually, if you already use emacs with lots of plugins to do a lot of stuff, you’re probably ready to take the challenge. I recommend Vim because I use it every day. If, by using only a text editor, you learn to develop without an IDE, you get bonus points.

Email.

If you’re not using mutt right now then you’re missing a lot. mutt is fast, highly configurable and runs on the command line. There’s a mutt challenge too, about whether mutt can do everything you can do in Gmail, but that’s a topic for another post. I find mutt even more powerful. Here’s a guide to keep mutt synchronized with your gmail account.

Music

Frankly, I’m surprised that there are plenty of options for listening to music on the console. I guess sysadmins love music too. My favorite choice is cmus. It has vim-like key bindings, so it feels natural if you’re used to mutt or vim. There are plenty of other options like moc or mp3blaster, but if you live the vimian way of life like me, stick with cmus. You can also use mpd, a nice daemon that plays music, especially useful if you want a music streaming solution. You can control it via vimpc.

Chat

Laughing in front of a black screen because somebody told you a joke makes the people around you think you’re some kind of psycho, but chatting is well supported in our powerful consoles. You can use irssi to chat in irc channels, but that’s not all: you can install bitlbee to tunnel different IM protocols into irc, so you can have all your conversations centralized the irc way. If you don’t like that approach, you can use finch, an ncurses version of the popular pidgin.

Pictures.

Yes, you can view pictures on the command line without a graphical interface. How? Straight from Caca Labs comes libcaca, a graphics library that outputs text instead of pixels, so it can work on older video cards or text terminals. Be sure to check your distribution’s packages: in Arch the package is called libcaca, and all the binaries you need to view pictures (cacaview) come in that package too.

Videos.

Videos are just pictures passing by really fast, so videos are also possible. For that you’ll need the fantastic mplayer or vlc. With mplayer you need to specify the caca video output driver, like this:

mplayer -vo caca video

With vlc you can use nvlc to get vlc in a nice ncurses interface. What’s the quality of these videos? Well, you can’t ask for much, but for anime or cartoons the videos are actually fairly good.

File manager.

Just because you’re on the command line doesn’t mean you have to stop using a file manager. Lots of people use midnight commander even in a graphical environment. I prefer a more vim-friendly approach: I like ranger because I already know all the key bindings that I need. I like it so much it’s my default file manager (shame on you, nautilus).

Tmux

Tmux is a terminal multiplexer. What does that mean? In simple terms, it’s like a window manager for your terminal: you can have tabs, split windows and a nice status bar, among other things. My life has not been the same since I met tmux. There’s an excellent tutorial and a book about how you can improve your productivity with tmux.
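To give a flavour, here is a minimal ~/.tmux.conf sketch; the bindings are illustrative personal choices, not defaults or recommendations from the tutorial mentioned above:

```
# ~/.tmux.conf -- minimal illustrative sketch
set -g prefix C-a          # use Ctrl-a as the prefix instead of Ctrl-b
unbind C-b
bind | split-window -h     # split panes with | and -
bind - split-window -v
set -g status-bg black     # simple status bar colours
set -g status-fg white
```

Reload it in a running session with `tmux source-file ~/.tmux.conf`.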

These are just some options so you can dive into the command line without it being a terribly painful experience. The true art of the command line is to learn bash, how to write scripts to avoid repetition and, more importantly, to understand that in UNIX a word is worth more than one hundred clicks. Keep a cheatsheet of all the basic commands with you and remember that man is your friend.

I recommend you go and have a look at Matt Might’s blog. He has posted some really interesting articles about what you can do with a UNIX command line. I especially recommend this one.

So, challenge accepted?

Update: Also, if you use twitter I recommend bti and tyrs. If you think you need to learn the basics before diving in, I recommend linuxcommandline.org; I believe there’s a new book about it.

OpenSSH (server and client) | A complete guide | UBUNTU

20 Feb

SSH, or Secure Shell, is a popular network protocol that allows for the exchange of data using cryptography for additional security. It is most commonly used to log into a remote machine, such as a server, and execute commands via the command line. SSH operates on port 22 on the server by default. So, how is this useful to you, someone who I assume is not familiar with SSH? If you own a VPS or dedicated server, it gives you greater control over your account; highlights include the ability to modify configuration files that may not otherwise be available to you, such as httpd.conf. This customization can allow better optimization, increased performance and, as a result, a better bang for your buck.

So, what will you get out of this tutorial? You will learn how to connect to your server using PuTTY, a great and free SSH client, and will be introduced to some basic commands and various other useful commands. So, let us begin.

To start things off, you’ll need to use/download an SSH client. If you are using a *Nix system, chances are you can launch an SSH Session from the terminal without installing any software. Windows, however, requires a client. I’d like to recommend PuTTY, although you could try a variety of clients, as found at
http://en.wikipedia.org/wiki/Comparison_of_SSH_clients
. But, for the purpose of this guide, you can download PuTTY at
http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html
. The windows executable installer should be good enough for any Windows user.

Once downloaded, which shouldn’t take too long by the way as the installer is under 2MB, run it and install the client. The default values for the installer should be good enough, although you can change things if needed.

So now that PuTTY has been installed, start the client up by using the created shortcuts or going to Start->All Programs->PuTTY->PuTTY if installed with default settings. On start, the PuTTY configuration window will appear.

Here you’ll need to specify the IP address of your server (ex: 192.168.100.1). You can also specify a port other than 22 if need be, although most servers listen on it by default. After specifying these things, you may want to save the information for quicker access by clicking on Save, under “Load, save or delete a stored session”. Now that you’ve done this, you can click Open to connect. At this point, a warning may come up saying that the server’s host key is not cached in the registry. In most cases, simply press Yes to trust the host and continue.

Here it is evident that you’ll need to specify a user to log in as. In many cases you’ll be able to use the same user information as those made available to you for logging into your control panels such as plesk or cpanel. After typing in your username, press enter. It will then ask for a password. At this point, you should note that the password will not be displayed as you type. Nope, not even the usual *** asterisks. So, just input your password carefully as to avoid any typos and press enter to login.

Now you’re in. Congratulations, you just opened up an SSH connection! From here on in you’ll be able to input various commands using your keyboard. In order to input a command, simply type it and press enter. You should also note that the standard right click function has been replaced by the paste function, and so should be used if you want to copy-paste. When done using PuTTY, simply enter exit to log out and exit the client.

Installing an OpenSSH Server

Type the following command to install both the ssh client and server on an Ubuntu server installation:

# sudo apt-get install openssh-server openssh-client

This will install the OpenSSH server (daemon) and start it on port 22. The SSH server has a configuration file located at /etc/ssh/sshd_config. There is a heap of config changes that can be applied by editing parameters in this file. The main ones that I usually change are:

Port :- default listening port for ssh daemon

PermitRootLogin :- allow root to login via ssh (disabled is recommended)
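For illustration, those two parameters look like this in /etc/ssh/sshd_config (the values here are examples, not defaults):

```
# /etc/ssh/sshd_config (fragment)
Port 2222            # listen somewhere other than the default 22
PermitRootLogin no   # recommended: do not allow direct root logins
```

Remember to restart the ssh daemon after editing the file for changes to take effect.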

What about configuring the OpenSSH Server (SSHD) daemon?

Now that we’ve installed the SSH server and had a look at the config file, we can test it from your home computer or from the same system with the command:

# ssh localhost

OR

# ssh user@your-server-ip-address

Assuming that your server hostname is userver.mydomain.com and username is vivek, you need to type the following command:

# ssh vivek@userver.mydomain.com

To stop the ssh server, enter:

# sudo /etc/init.d/ssh stop

To start the ssh server, enter:

# sudo /etc/init.d/ssh start

To restart the ssh server, enter:

# sudo /etc/init.d/ssh restart

The log for ssh logins is located in /var/log/auth.log by default, and entries should look like this:

Feb 20 10:02:49 bomber sshd[14113]: Accepted password for vivek from 1.155.23.110 port 39508 ssh2
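If you want to pull the username and source address out of such entries, a little awk does it. The sketch below runs against a canned copy of the line above rather than the live log; on a real system you would point awk at /var/log/auth.log instead, probably with sudo:

```shell
# Canned auth.log entry (copied from the example above):
line='Feb 20 10:02:49 bomber sshd[14113]: Accepted password for vivek from 1.155.23.110 port 39508 ssh2'

# In this message format the username is field 9 and the source IP field 11:
user_ip=$(printf '%s\n' "$line" | awk '/Accepted/ { print $9, $11 }')
echo "$user_ip"   # -> vivek 1.155.23.110
```

On the live log the same filter would be `awk '/Accepted/ { print $9, $11 }' /var/log/auth.log`.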

Configuring SSH using ~/.ssh/config

For system and network administrators or other users who frequently deal with sessions on multiple machines, SSH ends up being one of the most oft-used Unix tools. SSH usually works so well that until you use it for something slightly more complex than starting a terminal session on a remote machine, you tend to use it fairly automatically. However, the ~/.ssh/config file bears mentioning for a few ways it can make using the ssh client a little easier.

:- Abbreviating Hostnames

If you often have to SSH into a machine with a long host and/or network name, it can get irritating to type it every time. For example, consider the following command:

$ ssh web0911.colo.sta.solutionnetworkgroup.com

If you interact with the web0911 machine a lot, you could include a stanza like this in your ~/.ssh/config:

Host web0911
    HostName web0911.colo.sta.solutionnetworkgroup.com

This would allow you to just type the following for the same result:

$ ssh web0911

Of course, if you have root access on the system, you could also do this by adding the hostname to your /etc/hosts file, or by adding the domain to your /etc/resolv.conf to search it, but I prefer the above solution as it’s cleaner and doesn’t apply system-wide.

:- Fixing Alternative Ports

If any of the hosts with which you interact have SSH processes listening on alternative ports, it can be a pain to both remember the port number and to type it in every time:

# ssh webserver.example.com -p 5331

You can affix this port permanently into your .ssh/config file instead:

Host webserver.example.com
    Port 5331

This will allow you to leave out the port definition when you call ssh on that host:

# ssh webserver.example.com
:- Custom Identity Files

If you have a private/public key setup working between your client machine and the server, but for whatever reason you need to use a different key from your normal one, you’ll be using the -i flag to specify the key pair that should be used for the connection:

# ssh -i ~/.ssh/id_dsa.mail srv1.mail.example.com
# ssh -i ~/.ssh/id_dsa.mail srv2.mail.example.com

You can specify a fixed identity file in .ssh/config just for these hosts instead, using an asterisk to match everything in that domain:

Host *.mail.example.com
    IdentityFile ~/.ssh/id_dsa.mail

I need to do this for Mikrotik’s RouterOS connections, as my own private key structure is 2048-bit RSA which RouterOS doesn’t support, so I keep a DSA key as well just for that purpose.

:- Logging in as a different user

By default, if you omit a username, SSH assumes the username on the remote machine is the same as the local one, so for servers on which I’m called tom, I can just type:

tom@conan:$ ssh server.network

However, on some machines I might be known as a different username, and hence need to remember to connect with one of the following:

tom@conan:$ ssh -l tomryder server.anothernetwork
tom@conan:$ ssh tomryder@server.anothernetwork

If I always connect as the same user, it makes sense to put that into my .ssh/config instead, so I can leave it out of the command entirely:

Host server.anothernetwork
    User tomryder
:- SSH Proxies

If you have an SSH server that’s only accessible to you via an SSH session on an intermediate machine, which is a very common situation when dealing with remote networks using private RFC1918 addresses through network address translation, you can automate that in .ssh/config too. Say you can’t reach the host nathost directly, but you can reach some other SSH server on the same private subnet that is publicly accessible, publichost.example.com:

Host nathost
    ProxyCommand ssh -q -W %h:%p publichost.example.com

This will allow you to just type:

# ssh nathost
:- More Information

The above are the .ssh/config settings most useful to me, but there are plenty more available; check man ssh_config for a complete list.

SSH Tunnels

Quite apart from replacing Telnet and other insecure protocols as the primary means of choice for contacting and administrating services, the OpenSSH implementation of the SSH protocol has developed into a general-purpose toolbox for all kinds of well-secured communication, whether using both simple challenge-response authentication in the form of user and password logins, or for more complex public key authentication.

SSH is useful in a general sense for tunnelling pretty much any kind of TCP traffic, and doing so securely and with appropriate authentication. This can be used both for ad-hoc purposes such as talking to a process on a remote host that’s only listening locally or within a secured network, or for bypassing restrictive firewall rules, to more stable implementations such as setting up a persistent SSH tunnel between two machines to ensure sensitive traffic that might otherwise be sent in cleartext is not only encrypted but authenticated. I’ll discuss a couple of simple examples here, in addition to talking about the SSH escape sequences, about which I don’t seem to have seen very much information online.

:- SSH Tunnelling and Port Forwarding

Suppose you’re at work or on a client site and you need some information off a webserver on your network at home, perhaps a private wiki you run, or a bug tracker or version control repository. This being private information, and your HTTP daemon perhaps not the most secure in the world, the server only listens on its local address of 192.168.1.1, and HTTP traffic is not allowed through your firewall anyway. However, SSH traffic is, so all you need to do is set up a tunnel to port forward a local port on your client machine to a local port on the remote machine. Assuming your SSH-accessible firewall was listening on firewall.yourdomain.com, one possible syntax would be:

# ssh user@firewall.yourdomain.com -L5080:192.168.1.1:80

If you then pointed your browser to localhost:5080, your traffic would be transparently tunnelled to your webserver by your firewall, and you could act more or less as if you were actually at home on your office network with the webserver happily trusting all of your requests. This will work as long as the SSH session is open, and there are means to background it instead if you prefer — see man ssh and look for the -f and -Noptions. As you can see by the use of the 192.168.1.1 address here, this also works through NAT.

This can work in reverse, too; if you need to be able to access a service on your local network that might be behind a restrictive firewall from a remote machine, a perhaps less typical but still useful case, you could set up a tunnel to listen for SSH connections on the network you’re on from your remote firewall:

# ssh user@firewall.yourdomain.com -R5022:localhost:22 -f -N

As long as this TCP session stays active on the machine, you’ll be able to point an SSH client on your firewall to localhost on port 5022, and it will open an SSH session as normal:

# ssh localhost -p 5022

I have used this as an ad-hoc VPN back into a remote site when the established VPN system was being replaced, and it worked very well. With appropriate settings for sshd, you can even allow other machines on that network to use the forward through the firewall, by allowing GatewayPorts and providing a bind_address to the SSH invocation. This is also in the manual.

SSH’s practicality and transparency in this regard has meant it’s quite typical for advanced or particularly cautious administrators to make the SSH daemon the only process on appropriate servers that listens on a network interface other than localhost, or as the only port left open on a private network firewall, since an available SSH service proffers full connectivity for any legitimate user with a basic knowledge of SSH tunnelling anyway. This has the added bonus of transparent encryption when working on any sort of insecure network. This would be a necessity, for example, if you needed to pass sensitive information to another network while on a public WiFi network at a café or library; it’s the same rationale for using HTTPS rather than HTTP wherever possible on public networks.

:- Escape Sequences

If you use these often, however, you’ll probably find it’s a bit inconvenient to be working on a remote machine through an SSH session, and then have to start a new SSH session or restart your current one just to forward a local port to some resource that you discovered you need on the remote machine. Fortunately, the OpenSSH client provides a shortcut in the form of its escape sequence, ~C.

Typed on its own at a fresh Bash prompt in an ssh session, before any other character has been inserted or deleted, this will drop you to an ssh> prompt. You can type ? and press Enter here to get a list of the commands available:

$ ~C
ssh> ?
Commands:
    -L[bind_address:]port:host:hostport  Request local forward
    -R[bind_address:]port:host:hostport  Request remote forward
    -D[bind_address:]port                Request dynamic forward
    -KR[bind_address:]port               Cancel remote forward

The syntax for the -L and -R commands is the same as when used as a parameter for SSH. So to return to our earlier example, if you had an established SSH session to the firewall of your local network, to forward a port you could drop to the ssh> prompt and type -L5080:localhost:80 to get the same port forward rule working.

SSH Keys (Passwordless login)

SSH keys allow authentication between two hosts without the need for a password. SSH key authentication uses two keys: a private key and a public key.

To generate the keys, from a terminal prompt enter:

#   ssh-keygen -t dsa

This will generate the keys using DSA authentication (use -t rsa for RSA authentication) for the identity of the user. During the process you will be prompted for a passphrase. Simply hit Enter when prompted to create the key without one. If you specify a passphrase during key creation, it will always be required during login, which defeats the purpose of a “passwordless” login.

By default the public key is saved in the file ~/.ssh/id_dsa.pub, while ~/.ssh/id_dsa is the private key. Now copy the id_dsa.pub file to the remote host and append it to ~/.ssh/authorized_keys:

#   cat id_dsa.pub >> .ssh/authorized_keys

OR
The ssh client has a built-in tool to perform this task, called ssh-copy-id. The one drawback is that this tool only works if your default port for SSH has not changed.
#   ssh-copy-id -i id_dsa.pub user@remotehost

Finally, double check the permissions on the authorized_keys file; only the owner should have write permission. If the permissions are not correct, change them with:

#   chmod 644 .ssh/authorized_keys

You should now be able to SSH to the host without being prompted for a password.

Block SSH Bruteforce Attempts

If you run your SSH server on port 22 with the machine connected to the internet, within a couple of minutes you will see a range of brute-force attempts against it. This is considered normal, as there are a wide range of attacks taking place at any given second on the internet. The iptables rules below allow a maximum of 3 new connections per minute per host. This is usually enough to thwart the most basic brute-force attacks and free up some of your bandwidth.

iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set --name SSH -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j LOG --log-prefix "SSH_brute_force "
iptables -A INPUT -p tcp --dport 22 -m recent --update --seconds 60 --hitcount 4 --rttl --name SSH -j DROP

Hope you enjoyed this guide to setting up and using SSH.

kippo | medium interaction honeypot | ubuntu

11 Jan

Kippo is a great medium interaction SSH honeypot, written in python, designed to log brute force attacks. If you check https://bomber.dyndns.org/kippo_rpts/graphs/index.php or https://zabomber.dyndns.org/kippo_rpts/graphs/index.php, you will see my current honeypot {Jack}, which has gathered some really interesting attacks over the past couple of months.

In this brief report I will show my experience installing kippo on a ubuntu system.

First, it is necessary to install some dependencies. It is highly recommended to use mysql for the kippo backend: by doing so, you can take advantage of some of the reporting I have configured, and it makes it easy to report on all honeypots via one centralised db instance should you choose to set up several kippo honeypots and thus create a honeynet. Lastly, I recommend using the svn version rather than a wget tarball, as it is a lot simpler to manage upgrades with svn.

$ sudo mkdir /opt/kippo 
$ sudo apt-get install subversion 
$ sudo apt-get install mysql-server 
$ sudo apt-get install python-dev openssl python-openssl python-pyasn1 python-twisted python-mysqldb

Next, we checkout kippo :

$ cd /opt/kippo/ 
$ sudo svn checkout http://kippo.googlecode.com/svn/trunk/ .

Next, we setup a non-root user (and mysql user) for the kippo instance :

$ sudo useradd -s /bin/false -d /home/kippo -m kippo

Next, we set up the Kippo MySQL database and create a user which we will reference in the kippo.cfg file later. Please remember to change 'secret' to whatever password you wish:

$ mysql -u root -p 
Enter password: 

Welcome to the MySQL monitor. Commands end with ; or \g. 
Your MySQL connection id is 41 Server version: 5.1.41-3ubuntu12.10 (Ubuntu)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. 

mysql> CREATE DATABASE kippo; 
Query OK, 
1 row affected (0.00 sec) 

mysql> GRANT ALL ON kippo.* TO 'kippo'@'localhost' IDENTIFIED BY 'secret'; 
mysql> exit

We then have to import the Kippo table structure into the MySQL database:

cd /opt/kippo/doc/sql/ 
mysql -ukippo -psecret kippo < mysql.sql

Now you can edit the config file, kippo.cfg; you will see various options that you can change as you like.

In particular, we need to uncomment the last lines of kippo.cfg and fill in the correct configuration data:

[database_mysql] 
host = localhost 
database = kippo 
username = kippo 
password = secret

We also want to get Kippo listening on port 22. Port 22 is the default SSH port and is targeted by the vast majority of brute-force attacks. There are two ways to get Kippo working on port 22.

One option is to port-forward 22 on your firewall to Kippo's default port 2222 (set in kippo.cfg).
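
For the firewall-forwarding route, a minimal sketch, assuming a Linux gateway running iptables (adjust for your own firewall and network):

```shell
# Redirect inbound TCP 22 to Kippo's default listener on 2222
sudo iptables -t nat -A PREROUTING -p tcp --dport 22 -j REDIRECT --to-ports 2222
```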

Or, like me, you may want Kippo to bind to port 22 directly, which requires authbind to be set up:

$ sudo apt-get install authbind

Next, as root:

$ touch /etc/authbind/byport/22 
$ chown kippo:kippo /etc/authbind/byport/22 
$ chmod 777 /etc/authbind/byport/22

Now, as the kippo user, change start.sh from:

twistd -y kippo.tac -l log/kippo.log --pidfile kippo.pid

to:

authbind --deep twistd -y kippo.tac -l log/kippo.log --pidfile kippo.pid

Finished! Kippo is now running on port 22.

One final touch: give ownership of the installation to the kippo user:

$ sudo chown -R kippo:kippo /opt/kippo/

Now we can start our honeypot. Very important: do not use the root account:

$ sudo su 
root@localhost:~# su kippo 
$ bash 
kippo@localhost:/opt/kippo$ ./start.sh 
Starting kippo in background...Loading dblog engine: mysql

We check that the SSH honeypot is running, in my case on port 22:

$ sudo netstat -atnp | grep 22 
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 3104/python 

If, from another computer, we launch an nmap scan against port 22:

$ nmap -PN -sV -p 22 192.168.1.1

Great, we have our fake ssh server :

Nmap scan report for 192.168.1.1

Host is up (0.00046s latency).

PORT   STATE SERVICE VERSION
22/tcp open  ssh     OpenSSH 5.1p1 Debian 5 (protocol 2.0)
Service Info: OS: Linux

Now we try to connect with a valid password, which is changeable via kippo.cfg (default 123456):

$ ssh -l root -p 22 192.168.1.1
Password: 
sales#

Ok, now on the honeypot machine we check our database for all the SSH connection attempts:

$ mysql -u kippo -p 
> use kippo; 
> select * from auth; 

+----+----------------------------------+---------+----------+----------+---------------------+
| id | session                          | success | username | password | timestamp           |
+----+----------------------------------+---------+----------+----------+---------------------+
|  1 | 6eb05042605211e0b00c000c29fc1cf3 |       0 | root     | sdfasdf  | 2011-04-06 13:33:19 |
|  2 | 6eb05042605211e0b00c000c29fc1cf3 |       0 | root     | quit     | 2011-04-06 13:34:42 |
|  3 | cfbead06605d11e09cf5000c29fc1cf3 |       0 | root     | sdfasdfa | 2011-04-06 14:55:21 |
|  4 | cfbead06605d11e09cf5000c29fc1cf3 |       1 | root     | 123456   | 2011-04-06 14:56:46 |
+----+----------------------------------+---------+----------+----------+---------------------+

You can see all the attempts, both failed and successful. You can explore other interesting data:

> show tables; 

+-----------------+
| Tables_in_kippo |
+-----------------+
| auth            |
| clients         |
| input           |
| sensors         |
| sessions        |
| ttylog          |
+-----------------+
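
For instance, assuming the schema imported above, a quick report like this one (my own ad-hoc query, not part of Kippo) lists the most-tried passwords:

```sql
SELECT password, COUNT(*) AS attempts
FROM auth
WHERE success = 0
GROUP BY password
ORDER BY attempts DESC
LIMIT 10;
```
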
Other interesting files in the Kippo installation:

dl/ – files downloaded with wget are stored here
log/kippo.log – log/debug output
log/tty/ – session logs
utils/playlog.py – utility to replay session logs
utils/createfs.py – used to create fs.pickle
fs.pickle – fake filesystem
honeyfs/ – file contents for the fake filesystem – feel free to copy a real system here

Lastly, there are some really cool third-party tools which I use to monitor the stats, draw graphs, and so on.

Check them out here :

Graphs : http://bruteforce.gr/kippo-graph
Ajaxterm : http://www.daveeddy.com/tutorials-scripts/ubuntu/ajaxterm-for-kippo-logs/

That’s it. You now have a medium interaction honeypot to capture attacks on your network.

Top 7 Linux Distros

20 Dec

There are various approaches to answering this question. The broad answer is: “any of them,” but that’s not very helpful if you’re just looking for a place to start.

The problem is, there never can be one best Linux distribution for everyone, because the needs of each user tend to be unique. Telling someone who’s looking for a good introductory distribution to try Gentoo, for instance, would be a mistake because for all its positive qualities, Gentoo is decidedly not a beginner’s distro.

All too often, Linux aficionados will tend to list the distributions they like as the best, which is fair, but if they are not aware of their audience, they could suggest something that does not meet that person’s needs. Finding a good Linux distribution is like finding a good match in an online dating service: good looks aren’t the only quality upon which to judge a Linux distro.

To help users discover the Linux distribution that’s best for them, this resource will definitively list the best candidates for the various types of Linux users to try. The use-case categories will be:

  • Best Desktop Distribution
  • Best Laptop Distribution
  • Best Enterprise Desktop
  • Best Enterprise Server
  • Best LiveCD
  • Best Security-Enhanced Distribution
  • Best Multimedia Distribution

Once you find the best Linux distribution for your needs, you can visit our Linux Migration Guides to assist you in installing and using the one you’d like to try.

Best Linux Desktop Distribution

There are a lot of Linux distributions that have the primary focus of becoming the next best desktop replacement for Windows or OS X. Of all the categories in this list, this is the most sought-after, and contentious, group of distros.

While it would be ideal to include many distributions on this list, the reality is that there really needs to be just one “best” Linux distribution. For early 2010, that distro has to be Canonical’s Ubuntu.

Ubuntu edges out its closest contenders, Fedora and openSUSE, because its development team is constantly focused on the end-user experience. Canonical and the Ubuntu community have spent a lot of time and resources on bringing ease-of-use tools to this distribution, particularly in the area of installing Ubuntu and installing applications within Ubuntu.

In addition, Ubuntu’s level of support for its desktop products is highly superior, which is important in this class of distributions since it is the most likely to contain users new to Linux. Both the official and unofficial Ubuntu documentation is robust and searchable, a big plus.

Best Linux Laptop Distribution

Laptop distributions almost fall into the same category as desktop users, but there are a number of key differences that make the criteria for evaluating a good laptop distribution important. Power management, docking tools, and wireless ease-of-use are critical to users on the go, as is having a distro that meets those needs.

Right now, the best laptop distribution is openSUSE, one of the lead contenders for the desktop honors. On the laptop, openSUSE shines with great connectivity tools, such as an easy-to-use networking toolset that not only handles WiFi connectivity, but also CDMA/cellular modem connections.

openSUSE also deals with docking stations for laptops very well, including dual-monitor management on the fly. Power management is very granular, which is great for detailing various power needs you might find yourself needing.

Best Linux Enterprise Desktop

This category is replete with great contenders as well, and it’s difficult to highlight just one. At the end of the day, though, the nod must be given to SUSE Linux Enterprise Desktop (SLED).

The reason is simple: while SLED and its primary competitor Red Hat Enterprise Linux Desktop are nearly identical in features and support performance, SLED has the advantage of the openSUSE Build Service, a free and open service that lets applications be built and delivered to SUSE Linux and openSUSE products (as well as Red Hat and CentOS).

This is a very important differentiator in enterprise desktop development, as it means that SLED has the current advantage of application building and deployment in the enterprise arena.

Best Linux Enterprise Server

Again, in this category it really comes down to two main contenders: Red Hat Enterprise Linux (RHEL) and SUSE Linux Enterprise Server (SLES). Given the pick for the Enterprise Desktop category, you might expect SLES to get the “best of” label here.

But, when all factors for the enterprise server are weighed, RHEL is still the king of this particular hill.

Red Hat edges out Novell with its server product, because RHEL users get a deeply mature distribution, and Red Hat’s support structure is second to none in the enterprise channels.

Best Linux LiveCD

As Linux technology improves, users can easily choose the LiveCD version of practically any of the Linux distros listed here to get the best LiveCD experience for their needs.

There is a specialized class of LiveCDs, however, that offers users utilities and tools for the specific purpose of repairing existing Linux and Windows installations. These distros are very useful to have regardless of what primary Linux distribution you like to use, because in a crisis they are invaluable to own.

In this class of distribution, KNOPPIX is hands-down the most complete and useful distro. Loaded on a CD or USB storage device, KNOPPIX will let you recover from nearly any rare Linux system crash as well as the much-less-rare Windows breakdowns.

Best Linux Security-Enhanced Distribution

Linux is inherently very secure compared to other operating systems, but there’s always room for improvement.

One of the challenges of locking down Linux is that, if you are not careful, you can take away too much functionality. Another challenge is that the best-known security-enhanced option, SELinux, is historically known to be difficult to configure correctly (strictly speaking, SELinux is a kernel security framework shipped by distros such as Fedora, rather than a distro in its own right). Still, if security out of the box is your priority, this is the best place to begin.

Another approach is the white hat method: using security and forensic tools to examine your existing installation, determine the holes, then lock your system down based on what gaps you find. If you have the time and inclination, this is a great way to do it, because this will get any existing system more secure right away.

For the white hat approach, the best distribution is BackTrack Linux, a dedicated penetration testing distro that will enable you to safely try to crack any system you are caretaking. Its toolset and strong community give it the advantage in this category.

Best Linux Multimedia Distribution

General Linux distributions have come a long way in terms of multimedia performance. Rare is the audio or video file that can’t be played on Linux. Music services such as Rhapsody and video sites like YouTube and Hulu are also standards-compliant and accessible to Linux users.

Still, for those users who are multimedia creators as well as consumers, there are Linux distributions that contain powerful tools for audio and video editing.

The best in this class is currently Ubuntu Studio. For audio, video, and graphic production, it contains a very complete set of tools, as well as format and codec support for a huge range of multimedia formats.

The applications contained in Ubuntu Studio are the same or similar to those used by major studios to create cutting edge work, so users are getting the best apps, coupled with the strong support ethos already found in the Ubuntu community.

In Linux there are as many opinions as there are lines of code. This represents one view of the best in Linux. What’s yours?

SSH | SCREEN | VNC

19 Dec
I am not where the work is

If you are anything like me, you have programs running on all kinds of different servers. You probably have a github account, a free Heroku instance, a work desktop, a couple website instances, and maybe even a home server. The best part is that using common Unix tools, you can connect to all of them from one place.

In this post, I will review some of the more interesting aspects of my workflow, covering the usage of SSH, screen, and VNC, including a guide for getting started with VNC. I’ll give some quick start information and quickly progress to advanced topics (like SSH pipes and auto-session-creation) that even experienced Unix users may not be aware of.

SSH to rule them all

By now you’ve almost certainly used SSH. It’s the easiest way to login to a remote machine and get instant command line access. It’s as easy as ssh user@example.com. You type in your password, and you’re in! But you might not know that it can be even easier (and more secure) than that.

Logging in via SSH without a password

We have only recently seen websites start to offer solutions for logging in without a password. SSH has provided a secure mechanism for this (based on public-key cryptography) since its inception. It’s pretty easy to setup once you know how it works.

1. Generate a public-private key pair

If you haven’t already, run ssh-keygen on your laptop, or whatever computer you will be doing your work from. You can just continue pressing Enter to accept the defaults, and you can leave the password blank (if you secure your laptop with encryption, a locking screensaver, and a strong password, your SSH key doesn’t require a password). This will generate a public key at ~/.ssh/id_rsa.pub and a private key at ~/.ssh/id_rsa. The private key should never leave your computer.

2. Copy the public key to each computer you connect to

For each computer that you connect to, run the following command:

ssh user@example.com 'mkdir -p .ssh && cat >> .ssh/authorized_keys' < ~/.ssh/id_rsa.pub
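
As an aside, many systems ship OpenSSH's ssh-copy-id helper, which performs essentially the same append for you (assuming it is installed on your machine):

```shell
# Copies ~/.ssh/id_rsa.pub into the remote authorized_keys for you
ssh-copy-id user@example.com
```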

This should be the last time you ever have to type your login password when connecting to the remote server. From now on, when you SSH to the remote server, its sshd service will encrypt some data using the public key that you appended to authorized_keys, and your local machine will be able to decode that challenge with your private key.

3. There is no step 3

It’s that easy! Don’t you wish you had set this up a long time ago?

SSH and pipes

Did you notice my SSH command up above? You can pipe data into and out of a remote process via SSH! This is amazingly useful. The SSH command above worked as follows:

  1. The contents of ~/.ssh/id_rsa.pub were piped into the SSH command
  2. SSH encrypted that data and sent it across the network to your remote machine
  3. The final argument to ssh ('mkdir -p .ssh && cat >> .ssh/authorized_keys') specified that instead of giving you an interactive login, you instead wanted to run a command.
  4. The first portion of that command (mkdir -p .ssh) created a .ssh directory on the remote machine if it did not already exist.
  5. The second portion (cat >> .ssh/authorized_keys) received the standard input via the SSH tunnel and appended it to the authorized_keys file on the remote machine.

This avoids the need to use SCP and login multiple times. SSH can do it all!
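
To see exactly what the remote half of that pipeline does, you can exercise it locally, with no ssh involved. A sketch using a throwaway directory and a fake key string:

```shell
# Simulate the remote side: append piped stdin to .ssh/authorized_keys
tmp=$(mktemp -d)
echo "ssh-rsa AAAAfakekey user@laptop" | ( cd "$tmp" && mkdir -p .ssh && cat >> .ssh/authorized_keys )

# The "key" has been appended, just as it would be on the real server
contents=$(cat "$tmp/.ssh/authorized_keys")
echo "$contents"
rm -rf "$tmp"
```
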
Here are some more examples to show you some of the neat things you can do with SSH pipe functionality.

Send the files at ~/src/ to example.com:~/src/ without rsync or scp
cd && tar czv src | ssh example.com 'tar xz'
Copy the remote website at example.com:public_html/example.com to ~/backup/example.com
mkdir -p ~/backup/
cd !$
ssh example.com 'cd public_html && tar cz example.com' | tar xzv
See if httpd is running on example.com
ssh example.com 'ps ax | grep [h]ttpd'
Other SSH tunnels

If piped data were the only thing that could be securely tunneled over SSH connections, that would still be useful. But SSH can also make remote ports seem local. Let’s say that you’re logged into example.com, and you’re editing a remote website that you’d like to test on port 8000. But you don’t want just anyone to be able to connect to example.com:8000, and besides, your firewall won’t allow it. What if you could reach example.com’s localhost:8000 from your local computer and browser? Well, you can!

Create an SSH tunnel
ssh -NT -L 9000:localhost:8000 example.com

Using the -L flag, you can tell SSH to listen on a local port (9000), and to reroute all data sent and received on that port to example.com:8000. To any process listening on example.com:8000, it will look like it’s talking to a local process (and it is; an SSH process). So open a terminal and run the above command, and then fire up your browser locally and browse to localhost:9000. You will be whisked away to example.com:8000 as if you were working on it locally!

Let me clarify the argument to -L a bit more. The bit before the first colon is the port on your local machine that you connect to in order to be tunneled to the remote port. The part after the second colon is the port on the remote machine. The “localhost” bit in the middle is the host that will be connected to, from the perspective of example.com. When you realize the ramifications of this, it becomes even more exciting! Perhaps you have a work computer to which you have SSH access, and your company has an intranet site at 192.168.10.10. Obviously, you can’t reach this from the outside. Using an SSH tunnel, however, you can!

ssh -NT -L 8080:192.168.10.10:80 work-account@work-computer.com

Now browse to localhost:8080 from your local machine, and smile as you can access your company intranet from home with your laptop’s browser, just as if you were on your work computer.

But my connection sucks, or, GNU screen

Have you ever started a long-running command, checked in on it periodically for a couple hours, and then watched horrified as your connection dropped and all the work was lost? Don’t let it happen again. Install GNU screen on your remote machine, and when you reconnect you can resume your work right where you left off (it may have even completed while you were away).

Now, instead of launching right into your work when you connect to your remote machine, first start up a screen session by running screen. From now on, all the work you are doing is going on inside screen. If your connection drops, you will be detached from the screen session, but it will continue running on the remote machine. You can reattach to it when you log back in by running screen -r. If you want to manually detach from the session but leave it running, type Ctrl-a, d from within the screen session.

Using screen

Screen is a complex program, and going into everything it can do would be a series of blog posts. Instead, check out this great screen quick reference guide. Some of screen’s more notable features are its ability to allow multiple terminal buffers in a single screen session and its scrollback buffer.

What happened to Control-a??

Screen intercepts Control-a to enable some pretty cool functionality. Unfortunately, you may be used to using Control-a for readline navigation. You can now do this by pressing Ctrl-a, a. Alternatively, you can remap it by invoking screen with the -e option. For example, running screen -e ^jj would cause Control-j to be intercepted by screen instead of Control-a. If you do this, just replace references to “C-a” in the aforementioned reference guide with whatever escape key you defined.

Shift-PageUp is broken

Like vim and less, screen uses the terminal window differently from most programs, controlling the entire window instead of just dumping text to standard output and standard error. Unfortunately, this breaks Shift-PageUp and Shift-PageDown in gnome-terminal. Fortunately, we can fix this by creating a ~/.screenrc file with the following line in it:

termcapinfo xterm ti@:te@

And while you’re mucking around in .screenrc, you might as well add an escape ^jj line to it, so that you can stop typing in -e ^jj every time you invoke screen.

Starting screen automatically

It’s pretty easy to forget to run screen after logging in. Personally, any time I am using SSH to login and work interactively, I want to be in a screen session. We can combine SSH’s ability to run a remote command upon login with screen’s ability to reconnect to detached sessions. Simply create an alias in your ~/.bashrc file:

alias sshwork='ssh -t work-username@my-work-computer.com "screen -dR"'

This will automatically fire up a screen session if there is not one running; if there is, it will reconnect to it. Detaching from the screen session will also log you out of the remote server.

Remote graphical work

Even in spite of SSH’s port forwarding capabilities, we still sometimes need to use graphical applications. If you have a fast connection or a simple GUI, passing the -Y flag to SSH could be enough to allow you to run the application on your local desktop. Unfortunately, this is often a very poor user experience, and it does not work well with screen (a GUI application started in a screen session dies when you detach from the session).

The time-tested Unix solution to this problem is VNC. This is effectively a combination of screen and a graphical environment. Unfortunately, it has several drawbacks.

  • It can be tricky to setup reasonably
  • It is inherently insecure, with unencrypted data and a weak password feature
  • Its performance on a sub-optimal connection is less-than-stellar
  • It doesn’t transfer sounds over the network

I’m going to help you solve all of these problems, except the sound one. Who needs sounds, anyway?

VNC installation and setup

On the remote machine, you’ll need to install a VNC server and a decent lightweight window manager. I chose fluxbox and x11vnc:

sudo apt-get install x11vnc fluxbox

The programs that are started when you first start a VNC session are controlled by the ~/.vnc/xstartup file. I prefer something a bit better than the defaults, so mine looks like this:

#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
netbeans &
gnome-terminal &
fluxbox &

Modify this to suit your own needs; I only invoke netbeans because it’s the only reason I ever use a remote GUI at all. NB: Although it may seem counterintuitive, it’s typically best to put the window manager command last.

You can start a VNC server with the following command (this isn’t the way you should normally do it! Read on…):

vncserver -geometry WIDTHxHEIGHT

where WIDTHxHEIGHT is your desired resolution. For me, it’s 1440×900. The first time you run this, it will ask you to create a password. We are going to ensure security through other means, so you can set it to whatever you want. Running the above command will give a message like “New ‘remote-machine:1 (username)’ desktop is remote-machine:1”. The “:1” is the display number. By adding 5900 to this, we can determine which port the VNC server is listening on. At this point, we can connect to remote-machine:5901 with a vncviewer and log in to the session we’ve created. We don’t want the entire Internet to be able to connect to our poorly-secured session, so let’s terminate that VNC server session:
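
The display-to-port arithmetic is worth having at your fingertips. A tiny helper (my own convenience function, not part of any VNC tool):

```shell
# X display :N corresponds to TCP port 5900+N
vnc_port() {
  echo $(( 5900 + $1 ))
}

vnc_port 1   # display :1 listens on 5901
```
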

vncserver -kill :1
Securing the VNC server

Remember how we tunneled ports with SSH? We can do the same thing with VNC data. First, we’ll invoke our VNC server slightly differently:

vncserver -localhost -geometry WIDTHxHEIGHT -SecurityTypes None

This causes the VNC server to only accept connections that originate on the local machine. It also indicates that we will not need a password to connect to our session; simply being logged in locally as the user who created the session is enough. You should now have a VNC server running on a remote machine listening on localhost:5901.

On your local machine, install a VNC viewer. I personally use gvncviewer, though I don’t particularly recommend it. Now, to connect to that remote port, you’ll need to start an SSH tunnel on your local machine:

ssh -NT -L 5901:localhost:5901 remote-machine.com

We can now run the VNC viewer on our local machine to connect via the tunnel to our VNC session:

gvncviewer :1
Speeding up VNC?

When starting an SSH tunnel, we can compress the data it sends by including the -C flag. Depending on your connection speed, it may be worth including the flag in your tunnel command. Experiment with this option and see what works best for you.

If you are really having problems, you might also want to check out the -deferUpdate option, which can delay how often display changes are sent to the client. For more information, man Xvnc.

Automatically starting and connecting to your VNC session

Putting everything together, we can create a script that does all of this for us. Simply set the GEOMETRY and SSH_ARGS variables appropriately (or modify it slightly to accept them as command line arguments).

#!/bin/bash
set -e

GEOMETRY=1440x900
SSH_ARGS='-p 22 username@remote-server.com'

# Get VNC display number. If there is not a VNC process running, start one
vnc_display="$(ssh $SSH_ARGS 'ps_text="$(ps x | grep X[v]nc | awk '"'"'{ print $6 }'"'"' | sed s/://)"; if [ "$ps_text" = "" ]; then vncserver -localhost -geometry '$GEOMETRY' -SecurityTypes none 2>&1 | grep New | sed '"'"'s/^.*:\([^:]*\)$/\1/'"'"'; else echo "$ps_text"; fi')"
port=`expr 5900 + $vnc_display`
ssh -NTC -L $port:localhost:$port $SSH_ARGS &
SSH_CMD=`echo $!`
sleep 3
gvncviewer :$vnc_display
kill $SSH_CMD

The vnc_display line is pretty gross, so I’ll give some explanation. It uses SSH to connect to the remote server and look for a running process named Xvnc: this is the running VNC server. If there’s one running we extract the display number. Otherwise, we start one up with the specified geometry and grab the display number from there. This all happens within a single command executed by ssh, and the resulting output is piped across the network back into our vnc_display variable.

Either way we get the value, we now know which port to connect to in order to reach our VNC server. We start our SSH tunnel and get the resulting PID. Finally, we invoke the vncviewer on that tunneled local port. When the VNC viewer exits, we automatically kill our SSH tunnel as well.
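
The display-number extraction at the heart of that gross line can be tried on its own against a sample vncserver message (the message text below is illustrative):

```shell
# Pull the display number off the end of vncserver's "New ... desktop" line:
# the greedy .* consumes everything up to the last colon.
msg="New 'remote-machine:1 (username)' desktop is remote-machine:1"
display=$(echo "$msg" | grep New | sed 's/^.*:\([^:]*\)$/\1/')
echo "$display"   # -> 1
```
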

Concluding remarks

One of the best parts of Unix is that it was built to be run remotely from Day 1. Just about anything you can do on your local computer can also be done on a remote one. By leveraging tools like SSH, screen, and VNC, we can make remote work as easy and convenient as local work. I hope this post gave you some ideas for how you can create a productive workflow with these very common Unix tools.