Friday, March 17, 2017

Weird missing drives error on a not-that-old laptop

I was working on an ASUS "Republic of Gamers" laptop for a coworker the other night. An otherwise decent piece of hardware was running off a 5400 RPM "quiet" hard drive, so I migrated the data over to a spare 500GB SSD using Parted Magic and moved the old drive to the second bay. Pretty straightforward so far.

I probably spent the next hour trying to figure out why neither hard drive was coming up as a boot option. Disabling Secure Boot, re-enabling it, toggling the CSM, attempting Startup Repair from a Windows 10 USB drive (which the system could detect), a BIOS/UEFI update... It turns out the BIOS/UEFI was detecting partitions, not physical drives, and I had partitioned the drives as GPT, not MSDOS format. Using Parted Magic again (gdisk and fdisk, specifically), I converted the partition tables back to MSDOS format, and then attempted to fix the Windows startup. I used instructions similar to these for getting things going again: bootsect.exe /nt60 all was the magic command in the Windows 10 recovery command prompt.

Newer systems and laptops should be just fine with GPT, but it was interesting to me that the boot menu of a UEFI system was not detecting GPT-partitioned drives.

Wednesday, March 8, 2017

De-duplicating XML validator in C#

I thought I needed XML validation for a project I'm working on, and wanted to be able to merge together several different schemas (XSDs) to check against. I also kept running into a "Wildcard '##any' allows element" error.

Overall, this seems to kinda work, but I may or may not need it. Maybe someone else can get it to work better; it's based on an original fix for duplicates I found.

Needs using directives for System.Xml.Schema and System.Collections.Generic.

public static XmlSchemaSet MergeSchemaFiles(string[] schemaFiles)
{
    // Load each schema file into its own XmlSchemaSet
    var schemas = new List<XmlSchemaSet>();
    foreach (var sf in schemaFiles)
    {
        var tempFileXSS = new XmlSchemaSet();
        tempFileXSS.Add(null, sf);
        tempFileXSS.CompilationSettings.EnableUpaCheck = false;
        tempFileXSS.Compile();
        schemas.Add(tempFileXSS);
        Console.WriteLine("Loading schema from: " + sf + ", with " + tempFileXSS.GlobalElements.Values.Count + " elements.");
    }
    // Merge schemas into one schema set: avoid duplicates
    var tempXSS = new XmlSchemaSet();
    tempXSS.Add(schemas[0]);
    for (int i = 1; i < schemas.Count; i++)
    {
        foreach (XmlSchemaElement xse0 in schemas[0].GlobalElements.Values)
        {
            foreach (XmlSchemaElement xseI in schemas[i].GlobalElements.Values)
            {
                if (xseI.QualifiedName.Equals(xse0.QualifiedName))
                {
                    // Drop the duplicate global element from its parent schema
                    ((XmlSchema)xseI.Parent).Items.Remove(xseI);
                    break;
                }
            }
        }
        // Reprocess and recompile after removing items
        foreach (XmlSchema schema in schemas[i].Schemas())
        {
            schemas[i].Reprocess(schema);
        }
        schemas[i].Compile();
        tempXSS.Add(schemas[i]);
    }
    // Return results
    Console.WriteLine("Retained " + schemas.Count + " XML schemas");
    return tempXSS;
}

Thursday, February 9, 2017

C# snippet on easy multi-threading / parallel processing

I've needed lately to figure out how to successfully execute parallel programming tasks in C#. I finally figured out a basic way to do this that uses current versions of .NET. The Threading in C# website was key to this.

using System.Threading.Tasks;
// "c" is the current element, "state" is a ParallelLoopState (for Stop/Break),
// and "i" is a long index you can refer to in the body
Parallel.ForEach(array, (c, state, i) => {
    SomeClass.SomeFunction(c, i); // placeholder for the per-item work
});
On an unrelated note: if you have to parse XML for some reason, XElement seems to be much easier to use than the older libraries.

Added: someone else wrote a good comparison of different threading methods in .NET.

Friday, January 29, 2016

Multi-hosted Nginx on RHEL 7 / CentOS 7 with PHP support

I learned recently that one can construct a pretty effective multi-hosted Nginx + PHP server on CentOS. There are a number of obstacles to deal with, but once handled, the results are promising. This configuration handles a basic wildcard / multi-site setup: I imagine you could expand it to support HTTPS via Let's Encrypt or wildcard domain certs via additional Nginx config files.

This setup in particular solves two problems: a "wildcard" Nginx setup; and the ability to easily create and delete hosted websites. This draws upon some references, older blog posts, and stuff done at the office. Assume any commands require sudo/root and an SSH terminal to complete.

1. Setup your CentOS VM as you normally would for your environment. If installing FirewallD, be sure HTTP / HTTPS services are open.
2. yum install epel-release pwgen zip unzip bzip2 policycoreutils-python -y : ensure some basic essentials are loaded. Also make sure your favorite editor (vim / nano / whatever) is installed.
3. Install Nginx from their repo.
4. Install PHP 5.6 (5.5 for older stuff) from the Webtatic repo. One deviation: don't run yum install php56w php56w-opcache ; instead, run yum install php56w-cli php56w-opcache php56w-fpm -y for your base install (the original command loads Apache). Don't forget to load any additional PHP modules.
5. Edit /etc/php.ini : set date.timezone to a value per the timezone list, and set upload_max_filesize to a larger value if you're going to be allowing file uploads.
6. Edit /etc/php-fpm.d/www.conf : change listen.owner and listen.group to nginx ; listen.mode to 0666 ; user and group to nginx ; pm = dynamic to pm = ondemand ; and set security.limit_extensions to allow .php .htm .html if you're going to run any PHP-in-HTML code.
7. Edit /etc/security/limits.d/custom.conf : add * soft nofile 8192 and * hard nofile 8192 to it.
8. Add the following to the end of /etc/ssh/sshd_config

Match Group nginx
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
9. Clear out / set-aside the conf files in /etc/nginx/conf.d
10. mkdir /web && mkdir /web/scripts && mkdir /web/sites
11. systemctl enable php-fpm nginx
12. Create all the specified configuration files, then reboot your VM.
13. Start building out your sites using the "create_site" script for each one. At some point, you're going to run into SELinux permissions issues: try the following to mitigate them (you may have to do this twice to identify all the correct policies)...

cd ~
rm php.pp
rm nginx.pp
grep php /var/log/audit/audit.log | audit2allow -M php
semodule -i php.pp
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
semodule -i nginx.pp
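
For reference, after the edits described in step 6, the relevant lines of /etc/php-fpm.d/www.conf end up looking something like this (values as described above):

```
user = nginx
group = nginx
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
pm = ondemand
security.limit_extensions = .php .htm .html
```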


Configuration files

/etc/sysctl.d/custom.conf


net.ipv4.tcp_congestion_control=illinois
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.core.somaxconn=1024
net.core.netdev_max_backlog=2048
fs.file-max=1000000
net.core.bpf_jit_enable=1
vm.swappiness=1

/etc/nginx/conf.d/servername.conf


server {
    listen       80;
    server_name  _;
    set $site_root /web/sites/$host/public_html;
    charset utf8;
    #access_log  /var/log/nginx/log/host.access.log  main;
    location / {
        root $site_root;
        index index.php index.html index.htm;
    }
    # redirect server error pages ; customize as needed
    #
    error_page  404              /404.html;
    error_page  500 502 503 504  /50x.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # pass PHP scripts to FastCGI server
    location ~ \.php$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    # pass HTML scripts to FastCGI server (legacy code)
    # PHP-FPM config also had to be updated to allow this
    location ~ \.html$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.html;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    location ~ \.htm$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.htm;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    # === Compression ===
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gunzip on;
}

/etc/nginx/conf.d/redirects.conf


# Probably could wildcard this somehow. Uncomment and use as needed.
#server {
#    server_name www.example.com;
#    return 301 $scheme://example.com$request_uri;
#}
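
One way to actually wildcard it: a regex server_name with a named capture (a sketch, untested here; requires Nginx built with PCRE, which the nginx.org packages are):

```
server {
    listen      80;
    server_name ~^www\.(?<domain>.+)$;
    return      301 $scheme://$domain$request_uri;
}
```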

/web/scripts/create_site (chmod 700)


#!/bin/bash
#Create a user; uncomment next two lines for password generation
useradd -s /sbin/nologin -g nginx -d /web/sites/$1 $1
#NEWPASSWORD=`pwgen -s 16 1`
#echo $NEWPASSWORD | passwd --stdin $1
#Build
chmod 755 /web/sites/$1
mkdir -p /web/sites/$1/public_html
chmod 755 /web/sites/$1/public_html
mkdir -p /web/sites/$1/private
chmod 700 /web/sites/$1/private
#Copy default files to show the site works
#(add your logic here: suggest adding robots.txt or humans.txt)
#Reset permissions as needed
chown -R $1:nginx /web/sites/$1
chown root:root /web/sites/$1
#Ensure SELinux access
restorecon -Rv /web/sites/$1
#Show new username + password (uncomment next line)
#echo "$1,$NEWPASSWORD"

/web/scripts/delete_site (chmod 700)


#!/bin/bash
userdel -r $1
rm -fr /web/sites/$1
echo $1 "deleted"

Thursday, January 14, 2016

OpenVPN access on Fedora / CentOS / RHEL

SELinux and Avahi conspire to make using OpenVPN on a Red Hat-based Linux rather unpleasant. Here's how you can go about resolving that.

  • Extract any cert files from the OVPN file you received, and save them as separate files in a directory intended for said purpose.
  • The next three commands require sudo / root user...
  • semanage fcontext -a -t home_cert_t (path to certificate file) for each cert.
  • restorecon -Rv (path of certs/*) to load the new security contexts.
  • yum remove avahi if you use a ".local" or other non-standard domain name internally. A safer option is to use systemctl disable avahi-daemon.socket avahi-daemon.service in case you need to flip it back on later.
  • Import the OVPN file to the Network Manager, and configure to use the cert files + login username + password ("password w/certificates" option).
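
The semanage/restorecon steps above can be sketched as a small script (the certificate directory is a hypothetical example; the command parameters let you dry-run with echo before changing SELinux state):

```shell
#!/bin/bash
# Label every cert file in a directory so OpenVPN/NetworkManager can read it.
# Pass "echo" as the 2nd/3rd arguments to dry-run instead of running for real.
label_certs() {
  local dir="$1" semanage_cmd="${2:-semanage}" restorecon_cmd="${3:-restorecon}"
  local f
  for f in "$dir"/*; do
    # Add a home_cert_t file-context rule for each cert
    "$semanage_cmd" fcontext -a -t home_cert_t "$f"
  done
  # Apply the new security contexts
  "$restorecon_cmd" -Rv "$dir"
}
```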

Wednesday, June 17, 2015

Junking spam email from Postfix queue

So if your mailq | tail -n 1 shows a lot of requests, and your qshape shows a lot of deferred stuff, it's time to nuke some spam backlog. Run the following as sudo....

mailq|fgrep .science|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .work|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .link|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .club|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .ninja|sed 's/\*.*//'|postsuper -d -
postsuper -d ALL deferred

... and any other domains / email addresses after the "fgrep" that look suspicious. pflogsumm is good for getting metrics on repeat offenders. It's tricky to avoid this happening in the first place.


Other References (some outdated)

https://rtcamp.com/tutorials/mail/postfix-queue/
https://www.howtoforge.com/delete-mails-to-or-from-a-specific-email-address-from-postfix-mail-queue
http://www.cyberciti.biz/tips/howto-postfix-flush-mail-queue.html

Added

There is a real issue with turning the output of mailq / "postqueue -p" into something usable to check against. Here's a modified example from my logs, regarding a spam message that's failing to be delivered: each entry in the output has a blank line after it. The Perl scripts floating around out there try to accommodate this, but poorly; a Python/Ruby script might work better for this....

91ADA1DDAFC*   12156 Wed Jun 17 13:14:32  source@example.com
                                         destination@example.com
                                         destination2@example.com
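
As a sketch of what such a parser could look like (awk, using the record format shown above; assumes continuation lines are indented and a blank line ends each record):

```shell
# Collapse each multi-line mailq record into one line: "queue-id sender rcpt..."
parse_mailq() {
  awk '
    /^[0-9A-F]+\*?[[:space:]]/  { id = $1; sub(/\*$/, "", id); from = $NF; rcpts = "" }
    /^[[:space:]]+[^[:space:]]/ { rcpts = rcpts " " $1 }
    /^$/ { if (id != "") print id, from rcpts; id = "" }
    END  { if (id != "") print id, from rcpts }
  '
}
```

Lines like the mailq header and trailing "-- N Kbytes in M Requests." summary start with a dash, so neither pattern picks them up.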

Added #2

There's an awesome RHEL / CentOS repo maintained with current Postfix builds. Was able to update 2.3 to 2.11 without immediately borking config files!

Added #3

You can define a PCRE whitelist/blacklist of domains and addresses, and refer to it in main.cf. You don't have to run "postmap" on this after updating it either.

    smtpd_sender_restrictions =
        check_sender_access      pcre:/etc/postfix/sender_access

Sample entries to add....

/\.google\.com$/        OK
/\.work$/               REJECT
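
You can sanity-check the REJECT patterns with grep's PCRE mode before reloading Postfix (a rough approximation of Postfix's matching, not the real lookup; assumes GNU grep with -P support):

```shell
# Print REJECT if the address matches the pattern, PASS otherwise
check() { echo "$1" | grep -Pq "$2" && echo REJECT || echo PASS; }
check "spammer@foo.work" '\.work$'   # matches the .work rule
check "user@example.com" '\.work$'   # does not match
```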

http://www.linuxquestions.org/questions/linux-server-73/how-to-reject-addresses-by-tld-in-postfix-678757/
http://www.postfix.org/ADDRESS_VERIFICATION_README.html

Friday, May 8, 2015

Enabling Tomcat as a systemd service

The simplest way to get Tomcat as a service working on a RHEL 7 / CentOS 7 / other systemd-based setup. Note that I'm not addressing any SELinux or FirewallD considerations here...

1. Make sure you've extracted a copy of Tomcat somewhere, and that you've populated its bin/setenv.sh with at least export JAVA_HOME=/location-of-java and export CATALINA_OPTS="java+tomcat variables" for your environment.

2. adduser -r tomcat

3. Edit /etc/systemd/system/tomcat.service with the following. Make sure you adjust the paths in ExecStart + ExecStop accordingly.

[Unit]
Description=Apache Tomcat Web Application Container
After=network.target
[Service]
Type=forking
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
User=tomcat
Group=tomcat
[Install]
WantedBy=multi-user.target

4. systemctl enable tomcat && systemctl start tomcat

You can get the status of the instance by running systemctl status tomcat -l , and use the "stop" clause to stop the instance.
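
For step 1, a minimal bin/setenv.sh might look like this (the path and memory settings are placeholders; adjust for your environment):

```shell
#!/bin/sh
# Placeholders: point JAVA_HOME at your JDK/JRE and size the heap to taste
export JAVA_HOME=/usr/lib/jvm/jre
export CATALINA_OPTS="-Xms512m -Xmx1024m -server"
```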