Thursday, February 9, 2017

C# snippet on easy multi-threading / parallel processing

I've had a need lately to figure out how to successfully execute parallel programming tasks in C#. I finally figured out a basic way to do this that uses current versions of .NET. The Threading in C# website was key to this.

using System.Threading.Tasks;

// The "i" is a long index you can refer to in the body;
// "state" (ParallelLoopState) lets you stop the loop early if needed
Parallel.ForEach(array, (c, state, i) => {
    // process element c at index i
});
On an unrelated note: if you have to parse XML for some reason, XElement seems to be much easier to use than the older libraries.

Friday, January 29, 2016

Multi-hosted Nginx on RHEL 7 / CentOS 7 with PHP support

I learned recently that one can construct a pretty effective multi-hosted Nginx + PHP server on CentOS. There are a number of obstacles to deal with, but once handled, the results are promising. This configuration handles a basic wildcard / multi-site setup: I imagine you could expand it to support HTTPS via Let's Encrypt or wildcard domain certs via additional Nginx config files.

This setup in particular solves two problems: a "wildcard" Nginx setup; and the ability to easily create and delete hosted websites. This draws upon some references, older blog posts, and stuff done at the office. Assume any commands require sudo/root and an SSH terminal to complete.

1. Set up your CentOS VM as you normally would for your environment. If installing FirewallD, be sure the HTTP / HTTPS services are open.
2. yum install epel-release pwgen zip unzip bzip2 policycoreutils-python -y : ensure some basic essentials are loaded. Also make sure your favorite editor (vim / nano / whatever) is installed.
3. Install Nginx from their repo.
4. Install PHP 5.6 (5.5 for older stuff) from the Webtatic repo. One deviation: don't run yum install php56w php56w-opcache ; instead, run yum install php56w-cli php56w-opcache php56w-fpm -y for your base install (the original command loads Apache). Don't forget to load any additional PHP modules.
5. Edit /etc/php.ini : set date.timezone to a value per the timezone list, and set upload_max_filesize to a larger value if you're going to be allowing file uploads.
6. Edit /etc/php-fpm.d/www.conf : change listen.owner and listen.group to nginx ; set listen.mode = 0666 ; change user and group to nginx ; change pm = dynamic to pm = ondemand ; and set security.limit_extensions to allow .php .htm .html if you're going to run any PHP-in-HTML code.
7. Edit /etc/security/limits.d/custom.conf : add * soft nofile 8192 and * hard nofile 8192 to it.
8. Add the following to the end of /etc/ssh/sshd_config

Match Group nginx
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
9. Clear out / set aside the conf files in /etc/nginx/conf.d
10. mkdir /web && mkdir /web/scripts && mkdir /web/sites
11. systemctl enable php-fpm nginx
12. Create all the specified configuration files, then reboot your VM.
13. Start building out your sites using the "create_site" script for each one. At some point, you're going to run into SELinux permissions issues: try the following to mitigate them (you may have to do this twice to identify all the correct policies)...

cd ~
rm php.pp
rm nginx.pp
grep php /var/log/audit/audit.log | audit2allow -M php
semodule -i php.pp
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
semodule -i nginx.pp
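Backing up to step 6 for reference: here's roughly what the changed lines in /etc/php-fpm.d/www.conf end up looking like once all the edits are in (the defaults you're replacing are apache / dynamic):

```ini
user = nginx
group = nginx
listen.owner = nginx
listen.group = nginx
listen.mode = 0666
pm = ondemand
security.limit_extensions = .php .htm .html
```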

Configuration files




server {
    listen       80;
    server_name  _;
    set $site_root /web/sites/$host/public_html;
    charset utf8;
    #access_log  /var/log/nginx/log/host.access.log  main;

    location / {
        root $site_root;
        index index.php index.html index.htm;
    }

    # redirect server error pages ; customize as needed
    error_page  404              /404.html;
    error_page  500 502 503 504  /50x.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # pass PHP scripts to FastCGI server
    location ~ \.php$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;   # php-fpm's default listen address; adjust if changed
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    # pass HTML scripts to FastCGI server (legacy code)
    # PHP-FPM config also had to be updated to allow this
    location ~ \.html$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.html;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    location ~ \.htm$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.htm;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    # === Compression ===
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gunzip on;
}

# Probably could wildcard this somehow. Uncomment and use as needed.
#server {
#    server_name;
#    return 301 $scheme://$request_uri;
#}
/web/scripts/create_site (chmod 700)

#!/bin/bash
# Usage: create_site <sitename> (run as root)
[ -z "$1" ] && { echo "usage: $0 sitename"; exit 1; }
#Create a user; uncomment next two lines for password generation
useradd -s /sbin/nologin -g nginx -d /web/sites/$1 $1
#NEWPASSWORD=`pwgen -s 16 1`
#echo $NEWPASSWORD | passwd --stdin $1
chmod 755 /web/sites/$1
mkdir -p /web/sites/$1/public_html
chmod 755 /web/sites/$1/public_html
mkdir -p /web/sites/$1/private
chmod 700 /web/sites/$1/private
#Copy default files to show the site works
#(add your logic here: suggest adding robots.txt or humans.txt)
#Reset permissions as needed
chown -R $1:nginx /web/sites/$1
chown root:root /web/sites/$1
#Ensure SELinux access
restorecon -Rv /web/sites/$1
#Show new username + password (uncomment next line)
#echo "$1,$NEWPASSWORD"
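If pwgen isn't installed, the commented password-generation line can be approximated with coreutils alone (a stand-in for the pwgen call, not what the script above uses):

```shell
# Generate a 16-character alphanumeric password from /dev/urandom
# (LC_ALL=C keeps tr from choking on non-ASCII bytes)
NEWPASSWORD=$(LC_ALL=C tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 16)
echo "${#NEWPASSWORD}"   # length of the generated password
```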

/web/scripts/delete_site (chmod 700)

#!/bin/bash
# Usage: delete_site <sitename> (run as root)
[ -z "$1" ] && { echo "usage: $0 sitename"; exit 1; }
userdel -r $1
rm -fr /web/sites/$1
echo $1 "deleted"



Thursday, January 14, 2016

OpenVPN access on Fedora / CentOS / RHEL

SELinux and Avahi conspire to make one's use of OpenVPN on a Redhat-based Linux to be rather unpleasant. Here's how you can go about resolving that.

  • Extract any cert files from the OVPN file you received, and save them as separate files in a directory intended for said purpose.
  • The next three commands require sudo / root user...
  • semanage fcontext -a -t home_cert_t (path to certificate file) for each cert.
  • restorecon -Rv (path of certs/*) to load the new security contexts.
  • yum remove avahi if you use a ".local" or other non-standard domain name internally. A safer option is to use systemctl disable avahi-daemon.socket avahi-daemon.service in case you need to flip it back on later.
  • Import the OVPN file to the Network Manager, and configure to use the cert files + login username + password ("password w/certificates" option).
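The two SELinux bullets can be sketched as a loop. This is a dry run that only prints the commands ("vpn-certs-demo" and the filenames are placeholders); drop the echoes and run as root to apply for real.

```shell
# Demo cert directory standing in for wherever you saved the extracted certs
CERTDIR=vpn-certs-demo
mkdir -p "$CERTDIR"
touch "$CERTDIR/ca.crt" "$CERTDIR/client.crt"
# Print the semanage command for each cert, then the restorecon to apply contexts
for f in "$CERTDIR"/*; do
    echo semanage fcontext -a -t home_cert_t "$f"
done
echo restorecon -Rv "$CERTDIR"
```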

Wednesday, June 17, 2015

Junking spam email from Postfix queue

So if your mailq | tail -n 1 shows a lot of requests, and your qshape shows a lot of deferred mail, it's time to nuke some spam backlog. Run the following as root (or via sudo)....

mailq|fgrep .science|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .work|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .link|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .club|sed 's/\*.*//'|postsuper -d -
mailq|fgrep .ninja|sed 's/\*.*//'|postsuper -d -
postsuper -d ALL deferred

... and any other domains / email addresses after the "fgrep" that look suspicious. pflogsumm is good for getting metrics on repeat offenders. It's tricky to avoid this happening in the first place.
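To see what the sed step in each pipeline is doing, here's a sanity check against a single hypothetical mailq line (the queue ID and address are made up): the "*" marks an active message, and sed strips it and everything after, leaving just the queue ID for postsuper -d.

```shell
# A made-up active-queue line in mailq's format
line='91ADA1DDAFC*   12156 Wed Jun 17 13:14:32  spammer@junk.science'
# Strip from the "*" onward, leaving only the queue ID
qid=$(echo "$line" | sed 's/\*.*//')
echo "$qid"   # -> 91ADA1DDAFC
```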



There is a real issue with the output of mailq / "postqueue -p" in terms of making something usable to check against. Here's a modified example from my logs of a spam message that's failing to be delivered. Each entry in the output has a blank line after it; the Perl scripts floating around out there try to accommodate this, but poorly. A Python/Ruby script might work better for this....

91ADA1DDAFC*   12156 Wed Jun 17 13:14:32

Added #2

There's an awesome RHEL / CentOS repo maintained with current Postfix builds. Was able to update 2.3 to 2.11 without immediately borking config files!

Added #3

You can define a PCRE whitelist/blacklist of domains and addresses, and refer to it in main.cf. You don't have to run "postmap" on this after updating it, either.

    smtpd_sender_restrictions =
        check_sender_access      pcre:/etc/postfix/sender_access

Sample entries to add....

/\$/         OK
/\.work$/       REJECT
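You can sanity-check patterns like these before deploying them; grep -E is a rough stand-in for Postfix's PCRE matching against the envelope sender ("spammer@junk.work" is a made-up address).

```shell
# Does the .work REJECT pattern match a .work sender? (it should)
echo "spammer@junk.work" | grep -qE '\.work$' && echo "REJECT .work sender"
# A normal address should fall through without matching
echo "user@example.com" | grep -qE '\.work$' || echo "example.com passes"
```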

Friday, May 8, 2015

Enabling Tomcat as a systemd service

The simplest way to get Tomcat as a service working on a RHEL 7 / CentOS 7 / other systemd-based setup. Note that I'm not addressing any SELinux or FirewallD considerations here...

1. Make sure you've extracted a copy of Tomcat somewhere, and that you've populated its bin/setenv.sh with at least export JAVA_HOME=/location-of-java and export CATALINA_OPTS="java+tomcat variables" for your environment.

2. adduser -r tomcat

3. Edit /etc/systemd/system/tomcat.service with the following. Make sure you adjust the paths in ExecStart + ExecStop accordingly.

Description=Apache Tomcat Web Application Container
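A fuller sketch of the unit, assuming Tomcat was extracted to /opt/tomcat (that path is an assumption; point ExecStart / ExecStop at your own copy):

```ini
[Unit]
Description=Apache Tomcat Web Application Container
After=network.target

[Service]
Type=forking
User=tomcat
Group=tomcat
# Have catalina.sh write a pid file so systemd can track the forked process
Environment=CATALINA_PID=/opt/tomcat/temp/tomcat.pid
ExecStart=/opt/tomcat/bin/startup.sh
ExecStop=/opt/tomcat/bin/shutdown.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target
```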

4. systemctl enable tomcat && systemctl start tomcat

You can get the status of the instance by running systemctl status tomcat -l , and use the "stop" clause to stop the instance.

Friday, January 16, 2015

mbuffer on FreeNAS + sending a recursive ZFS dataset

So I wanted to follow this procedure for doing a copy of a ZFS filesystem from one FreeNAS box to another. However, mbuffer isn't available for FreeNAS, and the devs aren't planning on adding it either. Fortunately, there is a working FreeBSD port of it available for install.

* Make sure you have SSH enabled on both systems. For this example, I'm assuming you're using the root user, or familiar with sudo users.
* On system #1, logged in via SSH, use wget to download an AMD64, version 9.3 or later, copy of the mbuffer package. At this time, that'd be the mbuffer-2014.03.10.txz file.
* Also use wget to download the security/mhash package (a dependency of mbuffer). At this time, that'd be the mhash- file.
* Run pkg add -f (name of txz file) for each of the two downloads.
* Repeat the previous steps to download and install the txz files on system #2.

As for the procedure itself, it seems to get hung up on redirecting the mbuffer output. Fortunately, there's a switch for silent operation. Here is the updated command to send a datapool and its recursive subvolumes to system #2, using SSH from system #1. You'll need to take a zfs snapshot beforehand.

zfs snapshot -r drivepool/dataset@snapshotname

zfs send -R drivepool/dataset@snapshotname | mbuffer -q -s 128k -m 1G | ssh root@system2 'mbuffer -q -s 128k -m 1G | zfs receive -F drivepool/dataset'
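Since a botched pipeline means restarting a long transfer, it can help to build the command as a string and eyeball it before running (a dry-run sketch using the example names from this post):

```shell
# Example values from this post -- substitute your own pool, snapshot, and host
POOL="drivepool/dataset"
SNAP="snapshotname"
DEST="root@system2"
# Assemble the full send | mbuffer | ssh | receive pipeline for review
CMD="zfs send -R ${POOL}@${SNAP} | mbuffer -q -s 128k -m 1G | ssh ${DEST} 'mbuffer -q -s 128k -m 1G | zfs receive -F ${POOL}'"
echo "$CMD"
```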

Monday, March 24, 2014

A brief test of using case-sensitive filenames on Windows 8 and Server 2012R2

If you load the Services for NFS module on Windows, and set the following reg key, you can enable partial support for case-sensitive (think Linux/Unix) filenames. This was done with Windows 8.1 and Server 2012 R2: I figure it should work like this on 7 or 2012; the functionality has been around for at least 10 years as far as I can tell.

HKLM\SYSTEM\CurrentControlSet\Control\Session Manager\kernel
obcaseinsensitive = 0

Quick observations...

  • On the workstation, one cannot create case-sensitive filenames directly: locally, or on an SMB share.
  • On the server, case-sensitive filenames co-exist just fine. They can also be copied over SMB, or downloaded with FileZilla FTP, and still remain case-sensitive.
  • Case-sensitive files can be copied back to an SMB share on the server, and retain their case-sensitivity.