Nicholls State University website 2015-2018

The Nicholls State University website has been a project of continuous iteration and enhancement. The 2015 redesign was no exception: a WordPress theme built on the Underscores (_s) starter theme, using Sass to organize and simplify complex CSS and customized PHP to create useful tools for communicating with visitors. One of the custom applications is the homepage calendar feed, which uses the Google Calendar API so that non-technical web editors can help update events through the calendar system they were already using as part of their daily duties.

The website was designed to be responsive based on screen size, keeping it usable across multiple devices. During this project's development, around 60% of visitors were using either phones or tablets when interacting with the website.

npm and Yeoman update/install notes for macOS

I had to deal with some issues introduced by bad Homebrew installations and possibly an old npm install. My guess is that I cheated/screwed things up by using a sudo install when I should have done something different.

This all started while playing with the WebDevStudios WordPress plugin generator for Yeoman:

Get a Plugin Kickstart with Yeoman & generator-plugin-wp!

This gist was very helpful: Fixing npm On Mac OS X for Homebrew Users
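From what I remember, the gist boils down to wiping the root-owned global node_modules, reinstalling Node through Homebrew, and making sure the npm directories are owned by your user instead of root. A rough sketch of those steps (exact paths may differ on your machine):

# wipe the root-owned global modules and reinstall Node via Homebrew
$ sudo rm -rf /usr/local/lib/node_modules
$ brew uninstall node
$ brew install node
# make sure the npm cache and global module directory belong to your user, not root
$ sudo chown -R $(whoami) ~/.npm /usr/local/lib/node_modules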

Another strange issue was caused by installing the generator-plugin-wp Yeoman generator on a broken Yeoman/npm system. This GitHub issue helped: https://github.com/npm/npm/issues/10995. The fix was a straightforward uninstall, cache clear, and reinstall. I may have reinitialized my terminal session to drop out of sudo mode. It looked something like:

$ sudo npm remove -g yo generator-plugin-wp
$ npm cache clean
# can't remember if I relaunched the terminal here to drop out of sudo mode
$ npm install -g generator-plugin-wp

I’ll be testing this yeoman generator on the side for a while: https://github.com/WebDevStudios/generator-plugin-wp
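Yeoman's convention is to invoke a generator without its generator- prefix, so scaffolding a test plugin should look something like this (the my-test-plugin directory is just a placeholder):

$ npm install -g yo generator-plugin-wp
$ mkdir my-test-plugin && cd my-test-plugin
$ yo plugin-wp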

Possum Wasp

So last night I attempted to fight off an urban possum with a can of wasp spray. It was all I had, and the possum didn't exactly run off; instead it walked away looking sort of angry. Now if my poor choice of weaponry caused some angry mutation in downtown Houma, you have my apologies.


Spying on a directory with auditd

Files started coming up missing on a server and I got freaked out looking for security holes, but sometimes it's just users and other utilities spiking the punch bowl. You can get serious about watching files with other utilities, but I went back to good ole auditd.

A simple test to track stuff getting trashed from an upload folder:

auditctl -w /site-dir/wp-content/uploads/ -p wa -k upload_issue

A capital -W will remove the rule:

auditctl -W /site-dir/wp-content/uploads/ -p wa -k upload_issue

Do a quick search for issues with ausearch.

ausearch -f wp-content/uploads
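Since the watch was tagged with a key, you can also search by that key instead of the path:

ausearch -k upload_issue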

Now permanently add the rule on a Red Hat system by putting this line in /etc/audit/audit.rules. Just leave off the auditctl command.

 -w /site-dir/wp-content/uploads/ -p wa -k upload_issue

Of course you need to make sure the auditd process is actually running and enabled at boot with chkconfig, etc. Good ole status check like:

/etc/init.d/auditd status
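On an older Red Hat/CentOS box with SysV init, turning the service on at boot and starting it would look something like this (newer systemd systems use systemctl instead):

chkconfig --list auditd
chkconfig auditd on
service auditd start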

Here are a few of the resources I used:

Please forgive the Red Hat auth-walls…

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Security_Guide/sec-Defining_Audit_Rules_and_Controls.html
http://stackoverflow.com/questions/29519590/monitor-audit-file-delete-on-linux
http://www.cyberciti.biz/tips/linux-audit-files-to-see-who-made-changes-to-a-file.html

My Favorite Httrack commands

HTTrack is a website mirroring utility that can swamp your disks with mirror copies of the internet. I’ve had to use it several times to make offline copies of websites for all sorts of weird reasons. You’ll find HTTrack at www.httrack.com, and you can get a full list of command line options at https://www.httrack.com/html/fcguide.html. There is a spiffy web and Windows wizard interface for HTTrack, but I gave that up.
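On a Mac you can also skip the wizard and install the command line version through Homebrew, assuming you already use Homebrew:

brew install httrack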

This is the recipe of command line options I’ve been using to produce a browsable offline version of accreditation documents. This command says “Make an offline mirror of these URLs, go up to 8 links deep on these sites and 2 links deep on other domains. Stay on the TLD (.edu) and do it as quickly as possible.” Be warned: as it currently stands this will fill up about 1.5GB of disk space ;P

httrack http://www.nicholls.edu/sacscoc-2016/ http://www.nicholls.edu/catalog/2014-2015/html/ http://www.nicholls.edu/about/ -O /Users/nichweb/web-test -r8 -%e1 -%c16 -*c16 -B -l -%P -A200000

The great part is that the archive grows as URLs are added.
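So growing the mirror is just a matter of re-running the same command with an extra URL appended and the same output directory; the /registrar/ URL below is only a made-up example:

httrack http://www.nicholls.edu/sacscoc-2016/ http://www.nicholls.edu/catalog/2014-2015/html/ http://www.nicholls.edu/about/ http://www.nicholls.edu/registrar/ -O /Users/nichweb/web-test -r8 -%e1 -%c16 -*c16 -B -l -%P -A200000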

Apache log one-liners using tail, awk, sort, etc.

A good bunch of samples, with other examples, can be found at: https://blog.nexcess.net/2011/01/21/one-liners-for-apache-log-files/

# top 20 URLs from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20
 
# top 20 URLs excluding POST data from the last 5000 hits
tail -5000 ./transfer.log | awk -F"[ ?]" '{print $7}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk -F"[ ?]" '{freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20
 
# top 20 IPs from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $1}' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$1]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20
 
# top 20 URLs requested from a certain IP from the last 5000 hits
IP=1.2.3.4; tail -5000 ./transfer.log | grep $IP | awk '{print $7}' | sort | uniq -c | sort -rn | head -20
IP=1.2.3.4; tail -5000 ./transfer.log | awk -v ip=$IP ' $1 ~ ip {freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20
 
# top 20 URLs requested from a certain IP, excluding POST data, from the last 5000 hits
IP=1.2.3.4; tail -5000 ./transfer.log | fgrep $IP | awk -F "[ ?]" '{print $7}' | sort | uniq -c | sort -rn | head -20
IP=1.2.3.4; tail -5000 ./transfer.log | awk -F"[ ?]" -v ip=$IP ' $1 ~ ip {freq[$7]++} END {for (x in freq) {print freq[x], x}}' | sort -rn | head -20
 
# top 20 referrers from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $11}' | tr -d '"' | sort | uniq -c | sort -rn | head -20
tail -5000 ./transfer.log | awk '{freq[$11]++} END {for (x in freq) {print freq[x], x}}' | tr -d '"' | sort -rn | head -20
 
# top 20 user agents from the last 5000 hits
tail -5000 ./transfer.log | cut -d' ' -f12- | sort | uniq -c | sort -rn | head -20
 
# sum of data (in MB) transferred in the last 5000 hits
tail -5000 ./transfer.log | awk '{sum+=$10} END {print sum/1048576}'
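One more in the same vein that I find handy; assuming the standard combined log format, field 9 is the HTTP status code:

# top 20 response status codes from the last 5000 hits
tail -5000 ./transfer.log | awk '{print $9}' | sort | uniq -c | sort -rn | head -20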