How to run a sub-domain of CLUG.Org

Why?

Because DNS is a great enabler!

Let’s say you want to share some information with the world, but you have a regular, dynamic xDSL Internet connection. You start up a web server, open port 80 on your router, find your IP address is 72.49.120.103, then call some friends to let them know that address. All is amazing and wonderful. Every so often, though, your IP address will change, and then you become lost to the world. You need to find out what your new IP address is, then call those you want to share with and tell them; it’s tedious, and you have to wait for them to find a pen.

Enter Dynamic DNS!

Dynamic DNS allows you to associate a host name with an IP address which changes, such as one assigned to a dial-up Internet connection or a cable modem.

Instead of 72.49.120.103, you can be amy.clug.org! Even I can remember that.

How?

Step zero is to send an e-mail to president@clug.org requesting a sub-domain, and it must include a phone number; I will not set someone up until I’ve talked to them! You can call me if you want: I have a Cincinnati number, Six Zero Four-5916.

Step one is to set up a service on your server. It can be SSH, HTTP, FTP, FreeCiv or anything else you like, but not Telnet; Telnet is bad. A note on security: if you aren’t sure of the security implications of the software you want to run, at a minimum do a Google search like “Linux howto secure ipp” beforehand (and no, there isn’t a space between how and to). Figure out what port or ports your service runs on; you can look in /etc/services or the man page, or use sudo nmap -sS 192.168.1.2 (where that last part is the LAN address of your server). Make sure that you can get to that service from another machine on your local network and that it gives back sane responses.

sudo nmap -sS 192.168.1.2

Starting Nmap 5.21 ( http://nmap.org ) at 2013-10-27 11:03 EDT
Nmap scan report for 192.168.1.2
Host is up (0.0000090s latency).
Not shown: 996 closed ports
PORT STATE SERVICE
22/tcp open ssh
631/tcp open ipp
2222/tcp open unknown

Nmap done: 1 IP address (1 host up) scanned in 0.25 seconds

Step two is to set up your router. This is pretty much beyond the scope of this little howto, as there are a zillion different routers out there, and some of them can be a pain to set up in ways the original builders think might be insecure (Apple Airport, I’m looking at you!). What you want to do is find an entry like “Port Forwarding” or, in some cases, “Game Access”. This is done by connecting to the built-in web server that runs on the router itself, usually at http://192.168.1.1/, and looking through the menus you find there, after you change the password to something secure resembling ho3r0cqh@m – and no, that isn’t my password. In my case, I wanted to open access to SSH on the non-default port of 2222, so I forward ports 2222 through 2222 to 192.168.1.2, port 2222. The port xx through yy is for a contiguous range of ports, and the destination port is the lowest port in the range. Not all routers do this kind of range, but it is the most confusing of the ones I’ve found.
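
Once the forward is in place, a quick sanity check is to probe the port from a machine outside your network (a friend’s machine, or anything not behind your router); the address and port below are just the examples from above. From inside your own network this test may fail even when everything is right – more on that in step six.

nc -zv 72.49.120.103 2222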

Step three is to figure out what your external IP address is. The script at the end of this article reads the address from my gateway router; however, there are plenty of places on the Internet that can give you this information, for instance;

links -dump http://www.ipchicken.com/ | grep -oP "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"

and

links -dump http://www.cmyip.com/ | grep -oP "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"

In both of these, I’m using links, but you can use elinks, w3m, wget, lynx or anything else that returns a page on the command line. The pipe to grep takes the output of the command before it and returns only (-o) the match for the Perl-style (-P) expression, which matches an IPv4 address (okay, it doesn’t match only valid addresses, but it’s good enough).
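
If you have dig installed (from the bind-tools or dnsutils package, depending on your distribution), you can skip the HTML scraping entirely; OpenDNS runs a resolver that answers the special name myip.opendns.com with the address you are asking from:

dig +short myip.opendns.com @resolver1.opendns.com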

My opinion is that reading the router is the better idea as it’s on the end of a very fast wire that doesn’t slow down my surfing speed.

Step four is to request a dynamic update of your hostname from http://freedns.afraid.org/dynamic/update.php with an argument of the SecretString that I provide you with. Each sub-domain has a unique SecretString, so these can be distributed easily, and used on a router if it runs dd-wrt or Tomato. As soon as you run;

SecretString="BgyVtp45gfnMrrd0n3D5GHns4b79saAKpTMAtv=="
wget -q --read-timeout=0.0 --waitretry=5 --tries=400 -O- \
http://freedns.afraid.org/dynamic/update.php?${SecretString}

your sub-domain should be active. A ping sent to clug.org should give my address, and one sent to you.clug.org should give your IP address (the pings may or may not succeed, but the addresses should be correct). Also, the update doesn’t need to be run from your server; any machine that uses the same gateway router will work, though I can’t come up with a good reason to do this. One of the interesting things about this method is that it doesn’t need to be run as root; any user can run the script below.
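
To see what the rest of the world gets for your name, ask DNS directly; amy.clug.org below is just the example host from earlier, use your own sub-domain and compare the answer to your external address from step three:

dig +short amy.clug.org
host amy.clug.org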

You could just run the wget line above as a cron job, but please don’t, it puts an excessive load on the machines at Afraid.org and that irritates Joshua Anderson, the owner of Afraid.org.

Step five is to set up a cron job to do this work while you sleep,

crontab -l
# .---------------- minute (0 - 59)
# | .------------- hour (0 - 23)
# | | .---------- day of month (1 - 31)
# | | | .------- month (1 - 12) OR jan,feb,mar,apr ...
# | | | | .---- day of week (0-7) (Sun=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# | | | | |
# * * * * * command to be executed
*/10 * * * * ~/bin/Afraid.org.update.sh

The script looks up your external address, compares it to whatever it had been, and if it has changed, requests an update. Save the script below as ~/bin/Afraid.org.update.sh and change the SecretString variable on line nine, replacing the value shown with the one I give you. Don’t forget to make it executable!

#!/bin/sh
# Run me to set the external address up
#
# This script only tries to update if there is a change in our IP address
# or we lose the connection to the World Wide Web.

Logfile=${HOME}/.Afraid.org.log

SecretString="BgyVtp45gfnMrrd0n3D5GHns4b79saAKpTMAtv=="

# The lines below get our IP address from the crappy little CBT Wireless
# router at home.
# They need to be modified if that router changes, or we use another service.
Current=`
links -source http://192.168.1.1/htmlV_Generic/home_Connect.asp \
| grep WanIPRoutingState_WanIPAddress \
| grep -oP "[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}"
`

# Present the date in an easily grepable fashion.
Date=`
date +"%Y-%m-%d+%H:%M"
`

# If the Logfile doesn't exist, create it with a known IP address.
if [ ! -f $Logfile ] ; then
echo "$Date 0.0.0.0" >> $Logfile
fi

# If the Logfile still does not exist, something is wrong, cry for help.
if [ ! -f $Logfile ] ; then
echo "Cannot create $Logfile - check directory permissions"
exit 1
fi

Last=`
cat $Logfile | tail -1 | cut -d " " -f 2
`

if [ "$Current" = "$Last" ] ; then # No update is required, silently exit.
    exit 0
fi

echo "Update of external IP address needed."
echo "$Date $Current" >> $Logfile
# The line below is from afraid.org and is what actually sets the DNS entries
# for the domain.
wget -q --read-timeout=0.0 --waitretry=5 --tries=400 -O- \
http://freedns.afraid.org/dynamic/update.php?${SecretString}
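
It’s worth running the script once by hand before handing it to cron; the paths below assume you saved it where described above. On the first run the log is created with a 0.0.0.0 placeholder, so an update is always requested, and the log should end with your current external address:

~/bin/Afraid.org.update.sh
tail ~/.Afraid.org.log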

Step six, why didn’t it work?

It probably did work; you just can’t see the forest for the trees. Say you are on a machine with an IP of 192.168.1.100, your server is at 192.168.1.2, the router is at 192.168.1.1, your external address is 40.30.20.10, and port 80 is forwarded to the server.

When I try to connect from my machine at my house, everything works properly, DNS resolves your external address and I see the web page on your machine because your router forwards my request to your server.

When you try to connect, you resolve your external address and send the request out through your router, which doesn’t understand why an internal address is trying to connect to another internal address through the router, so it drops the packet.

Before you spend hours trying to figure out what is going on, call somebody and ask if they can see your page; if they can, you’re golden.

To fix things so they work properly inside as well, add your server to your /etc/hosts file ( %SystemRoot%\system32\drivers\etc\hosts on Windows, /lib/ndb/hosts on Plan 9), and everything is good, unless you are on a laptop. If you are using a laptop and take it to a friend’s house, when you try to connect you resolve 192.168.1.2, which won’t work there. It wouldn’t be difficult to write a script that looks at the name of your access point and modifies the hosts file if you are home, but your access point would need a unique name; a rough sketch is below.
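
If you want to play with that idea, here is a rough sketch, not something I promise works on your laptop: it assumes a wireless laptop with iwgetid available (part of wireless-tools), that your home access point is called HomeAP, that your host name is amy.clug.org, and that the server sits at 192.168.1.2; it also needs root to edit /etc/hosts.

#!/bin/sh
# Sketch only: point amy.clug.org at the LAN address while at home,
# drop the override everywhere else. HomeAP, the host name and the
# address are placeholders, change them to match your network.
HomeSSID="HomeAP"
Hostname="amy.clug.org"
LanAddr="192.168.1.2"

Current=`iwgetid -r`   # name of the access point we are associated with

if [ "$Current" = "$HomeSSID" ] ; then
    grep -q "$Hostname" /etc/hosts || echo "$LanAddr $Hostname" >> /etc/hosts
else
    sed -i "/$Hostname/d" /etc/hosts
fi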

The proper way to fix this is to run your own internal DNS server, either on your router or on your server. The advantage of the router is that it’s pretty simple and you don’t need to worry about it once everything is set up; the advantage of the server is that you can do more with it, but you need to do more with it.

If you run dd-wrt, you can fix this by enabling dnsmasq, then adding your hostname to the Additional DNSMasq Options:

expand-hosts
address=/www/192.168.1.2
address=/mail/192.168.1.2
address=/amy.clug.org/192.168.1.2
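
After dnsmasq restarts, you can check that the override is being handed out by asking the router directly (192.168.1.1 and amy.clug.org are the example addresses from above):

dig +short amy.clug.org @192.168.1.1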

Free DNS from Afraid.Org

Dr. Richard Stallman

[Photo: Richard Stallman at Pittsburgh University]
Richard Matthew Stallman (born March 16, 1953), often known by his initials, RMS, is an American software freedom activist and computer programmer. He campaigns for software to be distributed in a manner such that its users receive the freedoms to use, study, distribute and modify that software; software that ensures these freedoms is termed free software. He is best known for launching the GNU Project, founding the Free Software Foundation, developing the GNU Compiler Collection and GNU Emacs, and writing the GNU General Public License.

Stallman launched the GNU Project in September 1983 to create a Unix-like computer operating system composed entirely of free software. With this, he also launched the free software movement. He has been the GNU project’s lead architect and organizer, and developed a number of pieces of widely used GNU software including, among others, the GNU Compiler Collection, the GNU Debugger and the GNU Emacs text editor. In October 1985 he founded the Free Software Foundation.

Stallman pioneered the concept of copyleft, which uses the principles of copyright law to preserve the right to use, modify and distribute free software, and is the main author of free software licenses which describe those terms, most notably the GNU General Public License (GPL), the most widely used free software license. In 1989 he co-founded the League for Programming Freedom. Since the mid-1990s, Stallman has spent most of his time advocating for free software, as well as campaigning against software patents, digital rights management, and other legal and technical systems which he sees as taking away users’ freedoms, including software license agreements, non-disclosure agreements, activation keys, dongles, copy restriction, proprietary formats and binary executables without source code.

He has received fourteen honorary doctorates and professorships for this work.

Wikipedia Link to Richard Stallman

Meeting-2013-08-24

There will be a meeting Saturday, August 24th, 2013, 10:00am, at the Pleasant Ridge Branch of the Cincinnati Public Library, located at 6233 Montgomery Road, Cincinnati, OH

Google Maps Goodness

The topic is setting up a web-based camera server on the Raspberry Pi; if there’s something specific you want to hear about, let me know, but leave out the Spam!
Steve Jones E-Mail

Thank you and hope to see you at the meeting!

Steve Jones

Raspberry Spi


# Insert an SD card into a Linux PC and;
sudo fdisk -l
#
# Disk /dev/sdb: 3965 MB, 3965190144 bytes
# 49 heads, 48 sectors/track, 3292 cylinders, total 7744512 sectors
# Units = sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes
# Disk identifier: 0x00000000
#
# Device Boot Start End Blocks Id System
# /dev/sdb1 8192 7744511 3868160 b W95 FAT32

sudo umount /dev/sdb1

sudo dd if=archlinux-hf-2013-07-22.img of=/dev/sdb
# 1870+0 records in
# 1870+0 records out
# 1960837120 bytes (2.0 GB) copied, 203.495 s, 9.6 MB/s

sudo fdisk -l
#
# Disk /dev/sdb: 3965 MB, 3965190144 bytes
# 64 heads, 32 sectors/track, 3781 cylinders, total 7744512 sectors
# Units = sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes
# Disk identifier: 0x00057540
#
# Device Boot Start End Blocks Id System
# /dev/sdb1 2048 186367 92160 c W95 FAT32 (LBA)
# /dev/sdb2 186368 3667967 1740800 5 Extended
# /dev/sdb5 188416 3667967 1739776 83 Linux

sudo gparted /dev/sdb
# Expand the sdb2 and sdb5 filesystems to use the whole SD card.

# Move the SD card to the 'pi and connect to a router that has internet access
# and another computer with ssh. The original root password is 'root'.

ssh root@192.168.2.22
# The authenticity of host '192.168.2.22 (192.168.2.22)' can't be established.
# ECDSA key fingerprint is 07:e6:10:f7:75:54:9a:58:af:98:97:e1:a8:f6:17:fb.
# Are you sure you want to continue connecting (yes/no)? yes
# Warning: Permanently added '192.168.2.22' (ECDSA) to the list of known hosts.
# root@192.168.2.22's password:
# X11 forwarding request failed on channel 0
# Last login: Fri Aug 2 00:16:57 2013 from 192.168.2.17
# [root@alarmpi ~]#

passwd
# Zaq12wsX
# Enter new UNIX password:
# Retype new UNIX password:
# passwd: password updated successfully

ping www.google.com
# PING www.google.com (74.125.225.177) 56(84) bytes of data.
# 64 bytes from den03s05-in-f17.1e100.net (74.125.225.177): icmp_seq=1 ttl=49 time=60.6 ms
# 64 bytes from den03s05-in-f17.1e100.net (74.125.225.177): icmp_seq=2 ttl=49 time=59.5 ms
# ^C
# --- www.google.com ping statistics ---
# 2 packets transmitted, 2 received, 0% packet loss, time 1001ms
# rtt min/avg/max/mdev = 59.554/60.077/60.601/0.578 ms

pacman -Syu
# :: Synchronizing package databases...
# core 42.5 KiB 299K/s 00:00 [#################################] 100%
# extra 536.2 KiB 632K/s 00:01 [#################################] 100%
# community 546.9 KiB 753K/s 00:01 [#################################] 100%
# alarm 7.1 KiB 77.2K/s 00:00 [#################################] 100%
# aur 19.1 KiB 407K/s 00:00 [#################################] 100%
# :: Starting full system upgrade...
# resolving dependencies...
# looking for inter-conflicts...
#
# Packages (8): cracklib-2.9.0-1 dhcpcd-6.0.4-1 glib2-2.36.3-3
# libgcrypt-1.5.3-1 libusbx-1.0.16-1 linux-firmware-20130728-1
# netctl-1.2-1 pacman-mirrorlist-20130725-1
#
# Total Download Size: 21.26 MiB
# Total Installed Size: 67.06 MiB
# Net Upgrade Size: 1.43 MiB
#
# :: Proceed with installation? [Y/n] y
# :: Retrieving packages ...
# cracklib-2.9.0-1-armv6h 240.8 KiB 350K/s 00:01 [######################] 100%
# dhcpcd-6.0.4-1-armv6h 88.5 KiB 226K/s 00:00 [######################] 100%
#
# --------- 8< ---------------------------- 8< --------------
#
# (7/8) upgrading netctl [#####################################] 100%
# (8/8) upgrading pacman-mirrorlist [#####################################] 100%

pacman-key --init
# gpg: /etc/pacman.d/gnupg/trustdb.gpg: trustdb created
# gpg: no ultimately trusted keys found
# gpg: Generating pacman keyring master key...
# gpg: key 69F70C96 marked as ultimately trusted
# gpg: Done
# ==> Updating trust database...
# gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
# gpg: depth: 0 valid: 1 signed: 0 trust: 0-, 0q, 0n, 0m, 0f, 1u

reboot

pacman -S netctl
# warning: netctl-1.2-1 is up to date -- reinstalling
# resolving dependencies...
# looking for inter-conflicts...
#
# Packages (1): netctl-1.2-1
#
# Total Installed Size: 0.16 MiB
# Net Upgrade Size: 0.00 MiB
#
# :: Proceed with installation? [Y/n] y
# (1/1) checking keys in keyring [###############################] 100%
# (1/1) checking package integrity [###############################] 100%
# (1/1) loading package files [###############################] 100%
# (1/1) checking for file conflict [###############################] 100%
# (1/1) checking available space [###############################] 100%
# (1/1) reinstalling netctl

cd /etc/netctl/
# This is the directory for setting up networking in Arch

install -m640 examples/wireless-wpa wireless-home
# Install does the same thing as cp, but can alter the mode in the transfer.

vi wireless-home
# Set the ESSID and network password, then save the file.
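# For reference, the finished profile looks roughly like this; the ESSID,
# key and interface name below are placeholders, not real values:
#
# Description='Home wireless'
# Interface=wlan0
# Connection=wireless
# Security=wpa
# ESSID='MyHomeNetwork'
# Key='MyWPAPassphrase'
# IP=dhcp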

netctl start wireless-home
# If you just get a prompt back, all is well!

# But, because we're paranoid, let's check.
ifconfig
# eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
# inet 10.42.0.43 netmask 255.255.255.0 broadcast 10.42.0.255
# ether b8:27:eb:9b:ed:bd txqueuelen 1000 (Ethernet)
# RX packets 1590 bytes 763883 (745.9 KiB)
# RX errors 0 dropped 0 overruns 0 frame 0
# TX packets 958 bytes 125982 (123.0 KiB)
# TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#
# lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
# inet 127.0.0.1 netmask 255.0.0.0
# loop txqueuelen 0 (Local Loopback)
# RX packets 0 bytes 0 (0.0 B)
# RX errors 0 dropped 0 overruns 0 frame 0
# TX packets 0 bytes 0 (0.0 B)
# TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#
# wlan0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
# inet 192.168.2.9 netmask 255.255.255.0 broadcast 192.168.2.255
# ether c8:3a:35:ca:41:a1 txqueuelen 1000 (Ethernet)
# RX packets 12 bytes 2010 (1.9 KiB)
# RX errors 0 dropped 0 overruns 0 frame 0
# TX packets 12 bytes 1768 (1.7 KiB)
# TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
#

netctl enable wireless-home
# This enables the connection after a reboot, kind of important. It also spits out
# the rather cryptic message below;
# ln -s '/etc/systemd/system/netctl@wireless\x2dhome.service' \
# '/etc/systemd/system/multi-user.target.wants/netctl@wireless\x2dhome.service'

reboot
# Verify that wireless comes up

ssh root@192.168.2.9

pacman -Syu
# :: Synchronizing package databases...
# core is up to date
# extra is up to date
# community is up to date
# alarm is up to date
# aur is up to date
# :: Starting full system upgrade...
# there is nothing to do

fdisk /dev/mmcblk0
# Welcome to fdisk (util-linux 2.23.2).
#
# Changes will remain in memory only, until you decide to write them.
# Be careful before using the write command.
#
#
# Command (m for help): p
#
# Disk /dev/mmcblk0: 7913 MB, 7913603072 bytes, 15456256 sectors
# Units = sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes
# Disk label type: dos
# Disk identifier: 0x00057540
#
# Device Boot Start End Blocks Id System
# /dev/mmcblk0p1 2048 186367 92160 c W95 FAT32 (LBA)
# /dev/mmcblk0p2 186368 3667967 1740800 5 Extended
# /dev/mmcblk0p5 188416 3667967 1739776 83 Linux
#
# Command (m for help): n
# Partition type:
# p primary (1 primary, 1 extended, 2 free)
# l logical (numbered from 5)
# Select (default p): p
# Partition number (3,4, default 3): 3
# First sector (3667968-15456255, default 3667968):
# Using default value 3667968
# Last sector, +sectors or +size{K,M,G} (3667968-15456255, default 15456255):
# Using default value 15456255
# Partition 3 of type Linux and of size 5.6 GiB is set
#
# Command (m for help): w
# The partition table has been altered!
#
# Calling ioctl() to re-read partition table.
#
# WARNING: Re-reading the partition table failed with error 16: Device or resource busy.
# The kernel still uses the old table. The new table will be used at
# the next reboot or after you run partprobe(8) or kpartx(8)
# Syncing disks.

reboot
# Just for good measure

ssh root@192.168.2.9
# root@192.168.2.9's password:
# X11 forwarding request failed on channel 0
# Last login: Thu Jan 1 00:00:29 1970 from 192.168.2.17

fdisk -l
# Disk /dev/mmcblk0: 7913 MB, 7913603072 bytes, 15456256 sectors
# Units = sectors of 1 * 512 = 512 bytes
# Sector size (logical/physical): 512 bytes / 512 bytes
# I/O size (minimum/optimal): 512 bytes / 512 bytes
# Disk label type: dos
# Disk identifier: 0x00057540
#
# Device Boot Start End Blocks Id System
# /dev/mmcblk0p1 2048 186367 92160 c W95 FAT32 (LBA)
# /dev/mmcblk0p2 186368 3667967 1740800 5 Extended
# /dev/mmcblk0p3 3667968 15456255 5894144 83 Linux
# /dev/mmcblk0p5 188416 3667967 1739776 83 Linux

mkfs.ext2 /dev/mmcblk0p3
# mke2fs 1.42.8 (20-Jun-2013)
# Filesystem label=
# OS type: Linux
# Block size=4096 (log=2)
# Fragment size=4096 (log=2)
# Stride=0 blocks, Stripe width=0 blocks
# 368640 inodes, 1473536 blocks
# 73676 blocks (5.00%) reserved for the super user
# First data block=0
# Maximum filesystem blocks=1509949440
# 45 block groups
# 32768 blocks per group, 32768 fragments per group
# 8192 inodes per group
# Superblock backups stored on blocks:
# 32768, 98304, 163840, 229376, 294912, 819200, 884736
#
# Allocating group tables: done
# Writing inode tables: done
# Writing superblocks and filesystem accounting information: done

echo "/dev/mmcblk0p3 /home ext2 defaults 0 0" >> /etc/fstab
# Set up a directory to store files.
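
mount -a
# Mount everything listed in fstab now, so the new entry can be checked
# before rebooting (nothing is using mmcblk0p3 yet, so this is safe).

df -h /home
# /dev/mmcblk0p3 should show up here, mounted on /home.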

reboot
# Make sure the mount worked

ssh root@192.168.2.9

timedatectl set-timezone America/New_York

timedatectl status
# Local time: Thu 2013-08-08 02:16:42 EDT
# Universal time: Thu 2013-08-08 06:16:42 UTC
# Timezone: America/New_York (EDT, -0400)
# NTP enabled: yes
# NTP synchronized: no
# RTC in local TZ: no
# DST active: yes
# Last DST change: DST began at
# Sun 2013-03-10 01:59:59 EST
# Sun 2013-03-10 03:00:00 EDT
# Next DST change: DST ends (the clock jumps one hour backwards) at
# Sun 2013-11-03 01:59:59 EDT
# Sun 2013-11-03 01:00:00 EST

pacman -S sudo
# resolving dependencies...
# looking for inter-conflicts...
#
# Packages (1): sudo-1.8.7-1
#
# Total Download Size: 0.62 MiB
# Total Installed Size: 2.82 MiB
#
# :: Proceed with installation? [Y/n] y
# :: Retrieving packages ...
# sudo-1.8.7-1-armv6h 635.9 KiB 604K/s 00:01 [###############] 100%
# (1/1) checking keys in keyring [######################################] 100%
# (1/1) checking package integrity [######################################] 100%
# (1/1) loading package files [######################################] 100%
# (1/1) checking for file conflict [######################################] 100%
# (1/1) checking available space [######################################] 100%
# (1/1) installing sudo [######################################] 100%

visudo
# Append 'pi ALL=(ALL) ALL' to the end of the file.

pacman -S motion
# Time to install the actual software that makes this a web enabled web cam.
# resolving dependencies...
# looking for inter-conflicts...
#
# Packages (62): alsa-lib-1.0.27.2-1 damageproto-1.2.1-2 enca-1.14-1
# ffmpeg-compat-1:0.10.8-4 fixesproto-5.0-2 flac-1.3.0-1 fontconfig-2.10.93-1
# freetype2-2.5.0.1-1 fribidi-0.19.5-1 gsm-1.0.13-7 inputproto-2.3-1
# json-c-0.11-1 kbproto-1.0.6-1 lame-3.99.5-1 libass-0.10.1-1 libasyncns-0.8-4
# libdrm-2.4.46-2 libice-1.0.8-1 libjpeg-turbo-1.3.0-2 libmodplug-0.8.8.4-1
# libogg-1.3.1-1 libpciaccess-0.13.2-1 libpulse-4.0-2 libsm-1.2.1-1
# libsndfile-1.0.25-2 libtheora-1.1.1-3 libva-1.2.1-1 libvdpau-0.7-1
# libvorbis-1.3.3-1 libvpx-1.2.0-1 libx11-1.6.1-1 libxau-1.0.8-1
# libxcb-1.9.1-2 libxdamage-1.1.4-1 libxdmcp-1.1.1-1 libxext-1.3.2-1
# libxfixes-5.0.1-1 libxi-1.7.2-1 libxrender-0.9.8-1 libxtst-1.2.2-1
# libxxf86vm-1.1.3-1 mesa-9.1.6-1 mesa-libgl-9.1.6-1 opencore-amr-0.1.3-1
# openjpeg-1.5.1-1 orc-0.4.17-1 recode-3.6-7 recordproto-1.14.2-1
# renderproto-0.11.1-2 rtmpdump-20121230-2 schroedinger-1.0.11-1
# sdl-1.2.15-3 speex-1.2rc1-3 v4l-utils-0.9.5-2 wayland-1.2.0-1
# x264-20130702-2 xcb-proto-1.8-2 xextproto-7.2.1-1 xf86vidmodeproto-2.3.1-2
# xproto-7.0.24-1 xvidcore-1.3.2-1 motion-3.2.12-10
#
# Total Download Size: 16.99 MiB
# Total Installed Size: 89.23 MiB
#
# :: Proceed with installation? [Y/n] y
# :: Retrieving packages ...
# libjpeg-turbo-1.3.0-2-armv6h 265.3 KiB 241K/s 00:01 [###################] 100%
# v4l-utils-0.9.5-2-armv6h 423.5 KiB 453K/s 00:01 [###################] 100%
# alsa-lib-1.0.27.2-1-armv6h 341.0 KiB 35.7K/s 00:10 [###################] 100%
# gsm-1.0.13-7-armv6h 33.9 KiB 131K/s 00:00 [###################] 100%
# ( 9/62) installing fontconfig [#####################################] 100%
# (61/62) installing ffmpeg-compat [#####################################] 100%
# (62/62) installing motion [#####################################] 100%

useradd -m pi
# Create a user for normal logins

passwd pi
# pi
# Enter new UNIX password:
# Retype new UNIX password:
# passwd: password updated successfully

ssh pi@192.168.2.9
# pi@192.168.2.9's password:

mount
# /dev/mmcblk0p5 on / type ext4 (rw,relatime,data=ordered)
# devtmpfs on /dev type devtmpfs (rw,relatime,size=84784k,nr_inodes=21196,mode=755)
# /dev/mmcblk0p3 on /home type ext2 (rw,relatime)
# The line above is good!
# /dev/mmcblk0p1 on /boot type vfat (rw,shortname=mixed,errors=remount-ro)

mkdir -p /home/pi/motion
# Create a place for our stuff

chmod 750 motion
# Make it a little safer.

systemctl enable motion.service
# ln -s '/usr/lib/systemd/system/motion.service' '/etc/systemd/system/multi-user.target.wants/motion.service'

vi /etc/systemd/system/multi-user.target.wants/motion.service
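
# What remains (a sketch only, the option names vary between motion
# versions, so check the motion.conf that the package installs) is
# pointing motion at our storage directory and its built-in web server,
# then starting it:

vi /etc/motion/motion.conf
# target_dir /home/pi/motion
# webcam_port 8081

systemctl start motion.service
# Then browse to http://192.168.2.9:8081/ from another machine on the LAN.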

Meeting-2013-07-27

There will be a meeting Saturday, July 27th, 2013, 10:00am, at the Pleasant Ridge Branch of the Cincinnati Public Library, located at 6233 Montgomery Road, Cincinnati, OH

Google Maps Goodness

I don’t have a topic yet, if there’s something you want to hear about, let me know, but leave out the Spam!
Steve Jones E-Mail

Thank you and hope to see you at the meeting!

Steve Jones

The meeting minutes, as posted by Bill Stowell on 2013/07/29, were as follows;

My notes on Meeting 7-27-13: Please feel free to comment and add or modify my recollections based on your recollections.

Ideas for the Cincinnati Linux Users Group From meeting 7-27-13

I. Possible interactions with other groups:
a) Python Group in Cincinnati, have members of their group give lectures to Clug, have a joint meeting from time to time
b) University of Cincinnati Linux group, have members of their group give lectures to Clug, have a joint meeting from time to time

interact via special interest groups

Need: Point of contact for discussions
II. Special Interest Groups:
a) Linux Certification Group
1. Comprised of folks who would like to become certified linux administrators, software experts or other.
2. Lay-out certification requirements and materials/study/information needs
3. Develop timelines and milestone events
4. Get the work done—
b) Vulnerability studies group
1. Folks interested in IT security, Current tools for breaching computer security and how they work
2. Choose various available tools to study; obtain source code; ask questions

III. Desired Presentations:
a) New presentation on how to set up MYTH TV
1. Includes general discussion on how the program works and general hardware discussion to include problem areas and insights
2. Includes one “set-up for Dummies”–i.e. This is the specific hardware, software and set-up used and it works.
b) How to set up a web server on Raspberry Pi

c) How to set up and use Amazon cloud/web services

d) Raspberry PI/Arduino Robot how it was done and details

IV. Meeting Format:
1) Start meeting with introductions and possibly with a Linux question of some sort

2) Always have a time to “Solve the Problem” or “Answer the Question”

V. Other: Community Involvement*

a) Talk with the library we meet in about having a linux install meeting
1. Talk with vendors, refurbishers, schools, businesses about free computers for distribution to the community
b) Talk with churches, schools, etc. about having a linux install time
* The first contacts/viable places to help folks with linux will be the first one we do. In CLUG each member is “the leadership”.

Meeting-2013-06-22

The June meeting will be the CLUG picnic at Rentschler Forest in Butler county!

When: Saturday, June 22nd. 10:30 – Dark
Where: GE Shelter, Rentschler Forest (Follow Signs)
Who: Members (and family) are invited, others may join at the picnic!!!

Now, the important issues, what do we want to eat?
E-Mail me with your dietary delights, but leave out the Spam!
Steve Jones E-Mail

    Bring something to share: Beverages, Side dishes, Desserts!
    We’ll provide meat, flatware, napkins, cups, etc.
    Also, bring outdoor games, group games, maybe water balloons, anything!
    Electricity is NOT available at the site.
    A Motor Vehicle Permit is not required (it’s included with the shelter)
    Membership rates are prorated for new members.
    Since we haven’t collected dues yet, current members are asked to pay up.

Information about the park;

Address of the preserve: 5701 Reigart Rd, Hamilton, OH 45011

The preserve is North of Hamilton, very near the intersection of Route 4 and the Route 4 Bypass.



Mass mailing from the command line!

This is the little script that sent out the messages for the picnic!

#!/bin/bash
for Each in `cat CLUGers.txt`
do
    cat Message.txt | mutt -s "June CLUG meeting" $Each
    echo $Each
    sleep 10
done

Message.txt is a file in the local directory that contained the body text of the e-mail; CLUGers.txt is a list of e-mail addresses, one per line, of the people we wanted to send the e-mail to.
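
For example, CLUGers.txt is nothing fancy (the addresses below are made up):

alice@example.com
bob@example.org
carol@example.net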

Meeting-2013-05-25

There will be a meeting Saturday, May 25th, 2013, 10:00am, at the Pleasant Ridge Branch of the Cincinnati Public Library, located at 6233 Montgomery Road, Cincinnati, OH

Google Maps Goodness

I don’t have a topic yet, if there’s something you want to hear about, let me know, but leave out the Spam!
Steve Jones E-Mail

Thank you and hope to see you at the meeting!

Steve Jones

Automating Simple Tasks on Linux (Shell Scripts and other Simple Tools) Shells

Monty Stein Mar 24, 2000

There are 2 major groupings of shells in common use. Bourne-derived shells evolved from the first Unix shell; Bourne (sh), Korn (ksh) and the Bourne Again Shell (bash) are the major variants in use. The other evolutionary branch is the C shells from the early Berkeley Unix systems; csh is the form that most people see. Apparently it is a necessary feature of any shell to have the name form a pun.

Everything below will use Bourne style syntax since it will work on the broadest set of shell programs.

When using a shell it isn’t directly apparent how much power you have at your fingertips. There is a language that shells understand (embodied in shell scripts) that is also available when you are typing at one in interactive mode, and it can be put into files as scripts.
Globbing
Globbing is the step that is performed on wildcards in filenames to expand them into real filenames. Thus *.log is expanded to a list of all the files in the directory that match the pattern.

*
matches any number of characters
?
matches any one character
[]
matches a range of characters (like [0-9] for all the numbers)

For example:

echo * # when you don’t have enough of
# a system left to run ls

Input/Output
There are 3 I/O streams that are set up by the shell for any program that it starts: Standard Input (stdin, or stream number 0), Standard Output (stdout, or #1), and Standard Error (stderr, or #2). Normally the program reads its input from stdin, sends output to stdout and reports any errors to stderr. These streams can be manipulated before the program sees them by redirecting them. Normal forms of redirection are simply to and from files:

someprog <input_file >output_file

In this case the input and output are handled by files, and any errors would likely be reported to the screen. (Programs, of course, are not limited to using only these streams. However, most shell programming uses support programs of this form.)
Streams can be joined by referencing them by number. Thus:

someprog >output.log 2>&1

joins stderr (#2) and stdout (#1) together and puts them into output.log. Ordering is critical with this form: the redirections are processed from left to right, so stdout has to be pointed at the file before stderr is joined to it.

There is one more form of stream and it is called a “Here Document”. Its use is rare. Check the manual page for more information.
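
For the curious, a here document feeds a block of the script itself to a command’s standard input, for example:

tr a-z A-Z <<EOF
this text becomes the standard input of tr
EOF
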
Variables
Environment variables are most commonly used to pass some tidbit of information to a program that needs login or machine specific configuration (such as what X display your programs should display to).

To set a variable, the name of the variable is used with an equals sign immediately following (naming convention for environment variables is to use upper case). To access the contents of the variable, use a dollar sign and then the variable name:


/tmp-> VAR=contents
/tmp-> echo $VAR
contents

In places where the name of the variable would touch another alphanumeric character, the variable can be bracketed by curly braces to force the correct behavior:


mv $VAR ${VAR}old

(in this case, the shell would be looking for a variable VARold if the braces were not used)

Variables can be exportable or not. Exported variables are passed to programs that the shell starts (and to any that they start). Unexported variables are restricted to the current shell. Any number of variables can be exported with the export command:

/tmp-> VAR=contents
/tmp-> sh -c 'echo $VAR' # start a subshell as another process

/tmp-> export VAR
/tmp-> sh -c 'echo $VAR' # start a subshell with the exported value
contents

Remember, under the design of the Unix process model, programs spawned by the shell cannot set variables in the parent shell. The way around this is to source a file in the current shell:

/tmp-> . somefile

would run somefile as if it was typed in at that point.
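
For example, if somefile contains a variable assignment, sourcing it makes the variable visible in the current shell:

/tmp-> echo 'VAR=from_sourced_file' > somefile
/tmp-> . somefile
/tmp-> echo $VAR
from_sourced_file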

Shorts:

env
will print all the variables that the shell knows about.
export
without any arguments will print all exported variables.
unset
will remove a variable.

Special Variables

$?
return code of the last program
$!
PID of last background process spawned
$$
PID of the current shell, useful for creating temp files
$*
all the arguments to this shell
$0
what this shell was called by
$1 … $9
Arguments to the shell script (more args are there, just call shift to get to them)
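
A tiny script shows several of these at once (save it as, say, showargs.sh and run it with a few arguments):

#!/bin/sh
# showargs.sh - print some of the special variables
echo "I was called as $0 and my PID is $$"
echo "all of my arguments: $*"
echo "the first argument: $1"
shift                  # discard $1, $2 becomes $1, and so on
echo "after shift the first argument is: $1"
false                  # a command that always fails
echo "return code of the last command: $?"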

Quoting
The shell supports a rich variety of common quoting types (all right, 5).
Double quotes bracket strings and allow variables to be expanded.
Single quotes bracket strings and do not allow any shell string operations inside.
Back quotes will run the contents in a separate shell and return the output IN PLACE.
A backslash will quote a single character.
A pound sign “#” will comment out the rest of the line

/tmp-> VAR=contents
/tmp-> echo "this string has $VAR"
this string has contents
/tmp-> echo "this string has \$VAR"
this string has $VAR
/tmp-> echo 'this string has $VAR'
this string has $VAR
/tmp-> echo `echo $VAR|tr a-z A-Z`
CONTENTS

Job Control
Programs can be run in the background by putting an ampersand “&” at the end of the line. The process ID of the last backgrounded job is in the “!” variable. wait will wait on all background jobs to finish when called with no arguments, or on a single job when given a PID as the argument. wait returns the exit code of the process that it waited on.
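
For example:

sleep 30 &                      # start a long job in the background
echo "background PID is $!"
wait $!                         # block until that job finishes
echo "it exited with $?"
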
Testing and Control Structures
Commands that have completed successfully return a value of 0. This way errors can be identified by a rich set of return codes (thus TRUE is equal to 0 and FALSE is everything else). The last command’s return value is stored in the “?” variable.

The test command sets its return code based on an expression that can return information about a file or compare strings. The test command itself can be accessed by the “[” “]” pair. (For the historically minded: The original Bourne shell didn’t have the square bracket shortcuts and the test program had to be called directly. That is why if you want to get a list of all that test can test you have to run man test.)
A call to test would look something like:

[ -r /tmp/output ]

This would read: If the file /tmp/output is readable, return a TRUE exit status.
Note: When testing the contents of variables, always put them in double quotes. This avoids the problem of:

VAR=""
[ $VAR = junk ]

The shell sees:

[ = junk ]

If $VAR is quoted, the empty string is visible to the shell and the test is well formed.
Check the manual page for test for all the different options that it takes. Note: Some of the test options are platform dependent. Keep them simple for portability’s sake; use -r (readable) instead of -e (exists), which does not work with HP-UX’s test.
if/then/elif/else/fi

if expression
then
...
elif expression
then
...
else
...
fi

The classical if statement. If the result of expression is TRUE, then execute the then block. The expression can be a call to test or any other program.

if [ ! -d tempdir ] # create a temporary directory
then # if it doesn't already exist
    mkdir tempdir
fi

Lazy Evaluation
Lazy evaluation takes advantage of the fact that if A and B must be true in order for something to happen and A is not true, there isn’t any point in evaluating B (the converse for “or” also applies, it just flips the logic). It can shorten simple tests in scripts.
For example: This:

if [ -r somefile ]
then
cat somefile
fi

Will run identically to:

[ -r somefile ] && cat somefile

For “or” the logic inverts. If the first command is not TRUE, execute the second:

[ $RETURNCODE -eq 0 ] || echo "command failed"

case/esac
A case statement allows simplification of lots of nested if/then blocks. It takes the form of:

case value in
pattern1)
...
;;
pattern2)
...
;;
esac

The patterns are matched against the value using the same expansion rules that would be used for filename globbing.
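
For example, sorting out a yes/no answer:

echo "continue? (y/n)"
read ANSWER
case "$ANSWER" in
y*|Y*)
    echo "carrying on"
    ;;
n*|N*)
    echo "stopping here"
    exit 1
    ;;
*)
    echo "please answer y or n"
    ;;
esac
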
Functions
Functions encompass small, often-called portions of a script. From the outside, the function looks like another program. From inside the function, everything looks like it is running in a separate shell, except that all the variables from the parent script are available to the function for reading and writing.
The return (exit) value is passed with a call to return.

Functions look like:
somefunc () {
    echo "hello $1"
    return 0
}

And are called like:

somefunc "world"

Looping
for/do/done
For loops iterate over a list of values and execute a block for each entry in the list. They look like

for variable in list
do
...
done

list can be the output from a program or a globbed list. For example:

for filename in *.c
do
cp $filename $filename.backup
done

while/do/done
While loops look and act a lot like for loops, but will loop indefinitely until an expression returns FALSE. It looks like:

while expression
do
...
done

For example:

while [ ! -r "STABLE" ]
do
echo "waiting on STABLE flag"
sleep 60
done

Common Commands
ls
ls will LiSt the contents of a directory. Since most people learn about this command fairly quickly, I’ll focus on the more useful flags:
-S sort by size (GNU only)
-t sort by time last modified
-r reverse sort
-a all files
-d do not enter directory
cut
cut will cut lines of text by column or by delimiter.
-c10-24 would output columns 10 through 24
-d: -f1,3 would output the first and third fields delimited by a colon

this:that:the other

returns

this:the other

sed
Stream EDitor. Will run ed style commands on files. The most common way to use it is for search and replace on the fly.

the first line

sed s/first/second/g returns

the second line

tr
character TRanslator. tr can translate one set of characters into another as well as suppress duplicate input characters.

lower case

tr a-z A-Z returns

LOWER CASE

the -s switch will force the suppression of duplicated sequences of characters

this   that   the    other

tr -s ' ' returns

this that the other

(useful to preprocess a tabular report into something that cut can work on) The -d switch will delete characters (can be very useful with the -c complement switch to return only a given set of characters).

wc -l somefile|tr -cd "0-9" # gives the number of lines
# w/ no other chars

sort
sorts files. The most common way to use this is sort -u to suppress duplicated lines after the sort. A -r will reverse the sort, -n will attempt to convert string based numbers into machine numbers for the sort.
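
For example, a numeric reverse sort puts the biggest directories at the top of a disk usage report:

du -sk * | sort -rn | head
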
find
Find recursively descends into filesystems and (in the simplest form) prints filenames based on certain criteria.

find . -type f -print

will print the names of all files below the current directory

find . -newer /etc/lastbackup -type d -print

will print the names of all directories that have had files added or deleted since the file /etc/lastbackup was last modified.
xargs
xargs will build up command lines from standard input. When supplied a command to run, it will execute that command with as many arguments built up from its input as the OS will allow. -n num will limit the number of args passed to each command invocation.

find . -type f -print|xargs grep "your keys"

would search all files below and in the current directory for the string “your keys”
Running Jobs at Certain Times
cron
The crond daemon runs once a minute and runs any jobs scheduled by crontab or at. It normally handles all the recurring jobs that maintain the system. It can also be a huge security hole (there was a notable problem with the Vixie cron system in the RH5 series). Because of the problems that its use can cause, the cron system has built into it a way of restricting its use with the allow and deny files that are stored in /etc.
crontab
Each user (that is allowed to) has a crontab file that is read/written with the crontab command. Crontabs are used for jobs that need to be run at regular recurring points in time. The crontab file has this structure:

minute hour month-day month weekday job

So, to fetch mail using a script called /usr/local/bin/getmymail every minute during business hours:

* 7-17 * * 1-5 /usr/local/bin/getmymail

Read as: for every minute between 7am and 5pm, from Monday (day 1) to Friday (day 5), run the job /usr/local/bin/getmymail.

Use crontab -l to get the contents of your crontab entries. It is a very good idea to keep a master copy that you can edit and reload.

A possible edit session would be:

crontab -l >mycrontabfile

edit mycrontabfile

crontab mycrontabfile

The scripts that are run will have no variables set beyond the minimal user environment. Any scripts that are run should set up any variables that they need (an expanded $PATH variable, for example) or assume nothing about the environment they will be running in. Any output generated will be mailed to the user.
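
A defensive opening for a script that cron will run might look like this (the PATH below is just an example; list whatever your script actually needs):

#!/bin/sh
# cron gives us almost no environment, so set up our own
PATH=/bin:/usr/bin:/usr/local/bin
export PATH
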
at
Runs a job at a specific time. It differs from crontab in that it will run the job only once and that all environment variables are carried through from the shell that called it. The script comes in via standard input; any output will be mailed to the user.

At allows easy setting of the time that the script is to be run:

echo "myscript"|at now + 5 hours

would run myscript 5 hours from now.

echo "someotherscript"|at 5 pm Tuesday

would run someotherscript at the next 5pm on a Tuesday. Be certain to double check the date that at reports when the job is scheduled so that it is what you expected.

At will also run jobs by date:

echo "were you fooled?"|at 5 pm april 1

at -l will list all pending jobs.
atrm (or at -r on some systems) will remove a numbered job.
batch
Close to at now, but holds the job until the load average falls below 0.8 as well as running the job at low priority. Play nice.
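
It takes its job on standard input just like at (bigjob.sh here stands in for whatever long-running script you have):

echo "bigjob.sh"|batch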