Category Archives: code - Page 2

Python function to update a var in a list of tuples

As part of my new ssh-fail script, now written in python, I found myself needing to update a value inside a tuple in a list, but tuples are immutable so you can't just do list_a[3][3] = 'new thing' :(

so i wrote this function:

def tuple_update(touple, varloc, newval):
    temp = []
    for a in range(len(touple)):
        if varloc != a:
            temp.append(touple[a])
        else:
            temp.append(newval)
    return temp

and you can use it like this:

>>> ip_test = [('', 423, None, 0), ('', 64, None, 3), ('', 23, None, 10)]
>>> ip_test[1]
('', 64, None, 3)
>>> ip_test[1]=tuple_update(ip_test[1],1,38)
>>> ip_test[1]
['', 38, None, 3]
>>> ip_test[2]=tuple_update(ip_test[2],0,'')
>>> ip_test[2]
['', 23, None, 10]
>>> ip_test
[['', 423, None, 0], ['', 38, None, 3], ['', 23, None, 10]]

that's: tuple_update(tuple, location, newval)
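For what it's worth, the same update can be written with slicing instead of a loop; tuple_update2 below is just an illustrative name I made up, not part of the script:

```python
def tuple_update2(tup, varloc, newval):
    """Rebuild around the replaced element; returns a list,
    matching the behaviour of tuple_update above."""
    return list(tup[:varloc]) + [newval] + list(tup[varloc + 1:])
```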

SourceForge direct download with clean filename

If you ever wget files from SourceForge you'll know how the file name ends up with a bunch of extra junk (the ?… query string) tacked on



file=`echo "$1" | sed 's,.*/,,g; s/?.*//'`   # strip the path, then the ?… query string
echo "$file"
wget "$1" -O "./$file"


% ~/ ""
--2012-12-03 10:18:49--

Length: 2611269 (2.5M) [application/x-bzip2]
Saving to: `./GNS3-'

100%[=======================================================================================================================================================================>] 2,611,269   2.37M/s   in 1.1s    

2012-12-03 10:19:16 (2.37 MB/s) - `./GNS3-' saved [2611269/2611269]
( ~/del/sftest )% ls

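The same filename cleanup can be sketched in Python with just the stdlib (clean_name is a name I made up, and the sample URL below is hypothetical):

```python
from urllib.parse import urlparse
import posixpath

def clean_name(url):
    """Return just the filename: urlparse() separates off the ?… query,
    posixpath.basename() drops the leading path."""
    return posixpath.basename(urlparse(url).path)
```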

Bash one liner: rDNS of failed ssh logins

Quickly get the rDNS of each IP that failed to login to your ssh :D

grep Failed /var/log/auth.log | grep -v pronto | sed 's/.*from //;s/ port.*//' | sort -u | while read host; do host "$host"; done
Host not found: 2(SERVFAIL)
Host not found: 2(SERVFAIL)
Host not found: 3(NXDOMAIN)
Host not found: 3(NXDOMAIN) domain name pointer
Host not found: 3(NXDOMAIN)
Host not found: 3(NXDOMAIN)
Host not found: 2(SERVFAIL) domain name pointer
Host not found: 3(NXDOMAIN)
Host not found: 3(NXDOMAIN) domain name pointer
Host not found: 2(SERVFAIL) domain name pointer
Host not found: 3(NXDOMAIN) domain name pointer domain name pointer
Host not found: 3(NXDOMAIN)

:D    also you can replace the host "$host" part with: whois "$host" > "$host" and quickly whois each IP as well. I recommend doing this in its own dir though. Then just do less * and :n to go to the next file

Breakdown of the one liner for people new to linux/bash
This part is pretty self explanatory: just grepping auth.log for Failed, then grep -v is an inverse grep, getting rid of the lines with my own user name

grep Failed /var/log/auth.log|grep -v pronto

This part is removing everything up to and including the word 'from', then everything from the word 'port' onward.
the sed command is actually doing two sed actions separated by a semicolon (no need to pipe sed to sed)

sed 's/.*from //;s/ port.*//'
the original line looks like:
Nov  9 08:22:56 tasty sshd[25254]: Failed password for root from port 54268 ssh2
the end result is just the bare IP: ""
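The same two substitutions, shown with Python's re on a made-up sample line (203.0.113.7 is a documentation IP, not one from my log):

```python
import re

# hypothetical auth.log line, same shape as the real one above
line = "Nov  9 08:22:56 tasty sshd[25254]: Failed password for root from 203.0.113.7 port 54268 ssh2"

# first sub mirrors s/.*from //, second mirrors s/ port.*//
ip = re.sub(r" port.*", "", re.sub(r".*from ", "", line))
print(ip)  # → 203.0.113.7
```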

for more useful sed one liners check out this page
this next part just sorts the massive list; the -u flag makes it only print the unique entries

sort -u
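For comparison, the whole pipeline can be sketched in Python with just the stdlib. failed_ips and rdns are names I made up here, and socket.gethostbyaddr stands in for the host command:

```python
import re
import socket

def failed_ips(log_text, skip_user="pronto"):
    """grep Failed | grep -v user | sed ... | sort -u, in one function."""
    ips = set()
    for line in log_text.splitlines():
        if "Failed" not in line or skip_user in line:
            continue
        m = re.search(r"from (\S+) port", line)
        if m:
            ips.add(m.group(1))
    return sorted(ips)

def rdns(ip):
    """Reverse-DNS one address, like `host $ip`."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except (socket.herror, socket.gaierror):
        return "Host not found"
```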


Evil Javascript and snopes.com

As someone who likes to select text as they read it, use of javascript to disable that is rather annoying.

yes, I know about noscript/etc… but sites shouldn't be disabling text selection in the first place; it does nothing to protect the content

…to prove it, i cloned all of snopes.com and disabled that javascript

eg: (if you have JS enabled) no text selection, lame! yay, can has text selection

full site:


du -sh ./* | grep snopes
144M ./
13M ./snopes.js.tar.bz2
(it always impresses me how well text compresses)
# find | wc -l
6145
# find -name "*.html" | wc -l
5328
that means: 6145 total files, 5328 of which are the html pages for the stories


you may find yourself asking “how the hell?”
simple! wget + find + xargs + sed + bored

wget \
--recursive \
--no-clobber \
--page-requisites \
--html-extension \
--convert-links \
--restrict-file-names=windows \
--no-parent \
-D \
--user-agent="Mozilla/5.0 (X11; Ubuntu; Linux x86_64; rv:15.0) Gecko/20100101 Firefox/15.0.1" \

then to disable that javascript:

find . -name "*html" -print | xargs sed -i 's/var omitformtags/#var omitformtags/g'

^ what that does is list every file ending in ".html", making one massive list, kinda like:

# find -name "*.html"| tail

then it passes that list off to xargs, which runs the sed command on each file to prefix "var omitformtags" with a "#" (not actually a JS comment, but the syntax error it causes is just as effective), which in turn kills the JS that disables text selection.
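The find + xargs + sed step could also be sketched in Python, for comparison (comment_out is a name I made up):

```python
from pathlib import Path

def comment_out(root, needle="var omitformtags", prefix="#"):
    """Prefix every occurrence of `needle` in each .html file under `root`,
    mirroring: find . -name "*html" | xargs sed -i 's/needle/#needle/g'
    Returns how many files were changed."""
    changed = 0
    for page in Path(root).rglob("*.html"):
        text = page.read_text(errors="ignore")
        if needle in text:
            page.write_text(text.replace(needle, prefix + needle))
            changed += 1
    return changed
```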

took all of ~20 minutes to grab every file on the site via that, then a few seconds to disable that javascript on 5328 html files

this is not only a lesson in don't annoy linux geeks, but also in automation and how to edit 5000+ files in seconds

custom htop theme

you can get it here



Made my own htop theme :D
red ftw.



..yes, i named the theme “Pronto”