ruby.onl / one-liners

Sysadmin Scripts: Seven Ready-to-Run Ruby Utilities

2026-03-07

These aren't toy examples. They're real scripts for real operations work. Failed SSH login tracking, Apache log analysis, config section extraction, disk space monitoring, log time filtering, process checking, and JSON log parsing. Copy, paste, run. Modify to taste.

Part 1: Failed SSH Login Tracker

#!/usr/bin/env ruby
# Track failed SSH login attempts by IP
# Usage: ruby failed_ssh.rb /var/log/auth.log
# Or:    cat /var/log/auth.log | ruby failed_ssh.rb

failed = Hash.new(0)

ARGF.each_line do |line|
  if line =~ %r~Failed password.*from\s+(\d+\.\d+\.\d+\.\d+)~
    failed[$1] += 1
  end
end

failed.sort_by { |ip, count| -count }.each do |ip, count|
  printf "%6d %s\n", count, ip
end
Perl equivalent: same structure with while (<>) and $failed{$1}++. The Ruby version reads a little cleaner.
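If you prefer a more functional shape, the same count can be built with Enumerable#tally (Ruby 2.7+) instead of a mutable Hash.new(0). A minimal sketch over invented auth.log lines:

```ruby
# Count failed-login IPs with filter_map + tally (Ruby 2.7+).
# The sample lines below are made up to stand in for /var/log/auth.log.
lines = [
  'Jan 15 10:01:02 host sshd[123]: Failed password for root from 203.0.113.7 port 4242 ssh2',
  'Jan 15 10:01:05 host sshd[124]: Failed password for admin from 203.0.113.7 port 4243 ssh2',
  'Jan 15 10:02:11 host sshd[125]: Failed password for root from 198.51.100.9 port 4244 ssh2',
  'Jan 15 10:03:00 host sshd[126]: Accepted password for deploy from 192.0.2.5 port 4245 ssh2',
]

counts = lines
  .filter_map { |l| l[%r~Failed password.*from\s+(\d+\.\d+\.\d+\.\d+)~, 1] }
  .tally                        # => {"203.0.113.7"=>2, "198.51.100.9"=>1}
  .sort_by { |_ip, n| -n }

counts.each { |ip, n| printf "%6d %s\n", n, ip }
```

String#[] with a regexp and a capture index returns the capture (or nil for non-matching lines, which filter_map drops), so the whole pipeline is one expression.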

Part 2: Apache/Nginx Access Log Analyzer

#!/usr/bin/env ruby
# Parse common log format, report top URLs and status codes
# Usage: ruby access_report.rb /var/log/nginx/access.log

urls = Hash.new(0)
statuses = Hash.new(0)
ips = Hash.new(0)

ARGF.each_line do |line|
  # Common log format: IP - - [date] "METHOD URL PROTO" STATUS SIZE
  if line =~ %r~^(\S+) .* "(GET|POST|PUT|DELETE) (\S+) .*" (\d{3})~
    ips[$1] += 1
    urls[$3] += 1
    statuses[$4] += 1
  end
end

puts "--- Top 10 URLs ---"
urls.sort_by { |url, count| -count }.first(10).each do |url, count|
  printf "%8d %s\n", count, url
end

puts "\n--- Status Codes ---"
statuses.sort.each do |status, count|
  printf "%8d %s\n", count, status
end

puts "\n--- Top 10 IPs ---"
ips.sort_by { |ip, count| -count }.first(10).each do |ip, count|
  printf "%8d %s\n", count, ip
end

Part 3: Config File Section Extractor

Pull a specific section from an INI-style config file using the flip-flop operator.
#!/usr/bin/env ruby
# Extract a section from an INI-style config
# Usage: ruby extract_section.rb database /etc/myapp.conf

section = ARGV.shift  # first arg is the section name, rest are files

ARGF.each_line do |line|
  # Flip-flop: true from [section] until the next [anything].
  # Three dots (...) skip the end test on the line that turned it on,
  # so the [section] header itself doesn't flip it straight back off.
  if (line =~ %r~\[#{section}\]~) ... (line =~ %r~\[~ && line !~ %r~\[#{section}\]~)
    # Print everything in the section, but skip the next section's header
    puts line unless line =~ %r~\[~ && line !~ %r~\[#{section}\]~
  end
end
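The flip-flop operator is easier to trust once you've watched it run on something small. A self-contained sketch with made-up config content and the section name hard-coded for clarity:

```ruby
# Minimal flip-flop demo on an in-memory "config" (invented content).
config = <<~CONF.lines
  [server]
  port = 8080
  [database]
  host = localhost
  name = myapp
  [cache]
  ttl = 300
CONF

extracted = []
config.each do |line|
  # `...` (three dots) skips the end test on the line that turned the
  # flip-flop on, so [database] itself doesn't immediately flip it off.
  # The flip-flop stays true through the section body and turns false
  # after the [cache] header, which we filter out below.
  if (line =~ /\[database\]/) ... (line =~ /^\[/ && line !~ /\[database\]/)
    extracted << line.chomp unless line.start_with?('[')
  end
end

extracted  # => ["host = localhost", "name = myapp"]
```

The flip-flop keeps its on/off state per call site across iterations, which is exactly what makes it work inside an each block or a `while gets` loop.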

Part 4: Disk Space Monitor

#!/usr/bin/env ruby
# Check disk usage and warn on high utilization
# Usage: ruby diskcheck.rb [threshold]

threshold = (ARGV.shift || 80).to_i

%x~df -h~.each_line do |line|
  next if line =~ %r~^Filesystem~  # skip header
  fields = line.split
  usage = fields[4].to_i  # "85%" becomes 85
  mount = fields[5]
  if usage >= threshold
    printf "WARNING: %s at %d%% (%s used of %s)\n", mount, usage, fields[2], fields[1]
  end
end

Part 5: Log Timestamp Range Filter

#!/usr/bin/env ruby
# Filter log lines within a time range
# Usage: ruby timefilter.rb "2024-01-15 10:00" "2024-01-15 12:00" /var/log/app.log

require 'time'

abort "Usage: timefilter.rb START END [files...]" if ARGV.size < 2

start_time = Time.parse(ARGV.shift)
end_time   = Time.parse(ARGV.shift)

ARGF.each_line do |line|
  # Adjust the regex to match your log's timestamp format
  if line =~ %r~^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})~
    ts = Time.parse($1)
    puts line if ts >= start_time && ts <= end_time
  end
end

Part 6: Process Monitor

#!/usr/bin/env ruby
# Check if critical processes are running
# Usage: ruby proccheck.rb

critical = %w~nginx postgresql sshd cron~
running = %x~ps aux~
missing = []

critical.each do |name|  # `name` rather than `proc`, which shadows Kernel#proc
  unless running =~ %r~#{name}~
    missing << name
  end
end

if missing.empty?
  puts "All critical processes running"
else
  STDERR.puts "ALERT: Missing processes: #{missing.join(', ')}"
  exit 1
end

Part 7: Quick JSON Log Parser

#!/usr/bin/env ruby
# Parse JSON-formatted log lines, extract key fields
# Usage: ruby jsonlog.rb /var/log/app.json.log

require 'json'

ARGF.each_line do |line|
  begin
    entry = JSON.parse(line.chomp)
    # Adjust field names to match your log format
    printf "%s [%s] %s\n",
           entry['timestamp'] || '-',
           entry['level']     || '-',
           entry['message']   || '-'
  rescue JSON::ParserError
    STDERR.puts "Skipping malformed line #{$.}: #{line.chomp[0..60]}"
  end
end

Part 8: The Universal Pattern

Every script above follows the same workflow:
  1. Setup: Create counting hashes or variables with Hash.new(0) or Hash.new { |h,k| h[k] = [] }
  2. Loop: Use ARGF.each_line to process input from files or stdin
  3. Match: Use regex to find and extract data with capture groups
  4. Accumulate: Build up counts, lists, or grouped data
  5. Report: Sort and format output at the end
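The five steps above condense into a tiny runnable sketch. The log lines and their format are invented for illustration; a real script would read ARGF.each_line instead of an array:

```ruby
# The setup/loop/match/accumulate/report pattern in miniature.
# Sample data stands in for ARGF input (hypothetical log lines).
lines = [
  '2024-01-15 10:00:01 ERROR disk full',
  '2024-01-15 10:00:02 INFO  request served',
  '2024-01-15 10:00:03 ERROR disk full',
]

counts = Hash.new(0)                              # 1. Setup
lines.each do |line|                              # 2. Loop
  if line =~ /^\S+ \S+ (\w+)/                     # 3. Match: capture the level
    counts[$1] += 1                               # 4. Accumulate
  end
end
counts.sort_by { |_lvl, n| -n }.each do |lvl, n|  # 5. Report
  printf "%6d %s\n", n, lvl
end
```

Swap the regex and the accumulator and this same skeleton becomes any of the seven scripts above.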

This is the same workflow you'd use in Perl with while (<>), $hash{$key}++, and formatted output. The Ruby version just reads a bit cleaner. Learn this pattern once, apply it everywhere.


Created By: Wildcard Wizard. Copyright 2026