Sunday, April 25, 2021

Hosting webtrees on NearlyFreeSpeech

Brief notes on setting up the webtrees genealogy software (PHP, MySQL/MariaDB) on NearlyFreeSpeech.net web hosting
  • Email did not work out of the box (it's required for user signup, etc.): NFS requires sites to use sendmail. webtrees defaults to "sendmail -bs", which implies an SMTP session; use "-t" instead, which reads the recipients from the message headers (see the doco)
  • If your domain name is managed by NFS, you can host any standard type of site from a subdomain very easily: pick the subdomain when you create the site and NFS will add the DNS record for you. Then configure webtrees to use it, eg `base_url="http://webtrees.example.com"` in `data/config.ini.php` (see the sketch after this list)
  • Set up HTTPS with Let's Encrypt: NFS provides a CLI script, `tls-setup.sh`. The first time I ran it I got an error accessing the `.well-known/acme-challenge/...` file it creates, but the second time it worked fine. NFS then auto-upgrades connections to HTTPS, which meant static files didn't load until I also updated the base_url to https in the config file, as above.
  • Pretty URLs: a slight deviation from the documentation. NFS runs Apache 2.4, apparently without the mod_access_compat backwards-compatibility module (fix found here), so the contents of data/.htaccess should be changed to "Require all granted". In the root .htaccess file I used "RewriteBase /", since webtrees is hosted at the root; without it, pretty URLs don't work once rewrite_urls="1" is set in data/config.ini.php.
  • Changing the session timeout: the default is 7200s (2 hours), but expiry apparently depends on the number of page loads, so a quiet site may never log you out?... I don't know why the cookies don't just expire.
  • Upgrading: as of July 2022 I have successfully completed two upgrades without any of the above changes being reverted, though that remains possible. Just follow the release instructions.
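For reference, a minimal sketch of the relevant lines in `data/config.ini.php` after all of the above (database settings omitted; the subdomain is a placeholder):

base_url="https://webtrees.example.com"
rewrite_urls="1"

And data/.htaccess for Apache 2.4 becomes the single line:

Require all granted
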
Importing data from other tools:
  • GEDCOM files can be imported (use standard v5.5.1). Small ones can be uploaded from the web page, but beyond some size (maybe 50MB?) they take a while and then fail to upload, so you have to copy the file into the `data/` folder on the server instead (see the sketch after this list).
  • When you start a large import (my file had 755,000 individuals), it shows progress on screen, but for some reason it only progresses while you have the tab open - you can't let the computer sleep, but you can keep the tab in the background. If the computer does sleep, just browse back to the tree in the Control Panel; it will show the import status and continue.
  • I had a file that failed to import because of a bad character in a description. The file was valid UTF-8, but inserting the value confused the SQL statement: "SQLSTATE[22007]: Invalid datetime format: 1366 Incorrect string value: '\xF2\xAC\xA0\xB3 S...' for column `webtrees`.`wt_individuals`.`i_gedcom`..." (likely a four-byte UTF-8 character that the database's charset couldn't store). It appeared a bad copy-paste had caused it; I opened the GEDCOM file directly in a text editor, found the entry from the value being inserted, and removed some weird characters, to be fixed up properly later.
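Copying a big GEDCOM up can be done over NFS's SSH gateway; a sketch using scp, where the login format and web root follow NFS conventions and the filename is a placeholder:

scp big-tree.ged member_sitename@ssh.phx.nearlyfreespeech.net:/home/public/data/
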
Workaround to delete a large tree:

I needed to delete the aforementioned large tree after debugging the bad characters, so I could import the corrected file, but the website's delete button gave no feedback and eventually the database locked up as it kept trying to delete in the background. I stopped the DB process gracefully via the web interface, with no ill effects beyond people complaining the site was down - and the tree remained untouched. 

Obviously a tree this size is beyond the scope of a normal NFS MariaDB instance, but I had been able to watch the delete statements running via phpMyAdmin, yet the rows were all still there afterwards, so the delete must run in a single transaction. I looked up the webtrees source to determine the SQL statements and ran them one or a few at a time (so not in a transaction, but still in order), and that worked perfectly and much faster. 

First, determine the tree ID, a.k.a. `gedcom_id`, that you want to delete, from the wt_gedcom table. 
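A quick way to look that up from the shell, reusing the same connection details as the backup command further below (username and database name are placeholders; this assumes the default wt_ table prefix):

mysql --user=yourusername --host=mywebtrees.db mywebtrees -p -e "select gedcom_id, gedcom_name from wt_gedcom;"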

Then run these statements for that ID, e.g. 2: 
delete from wt_gedcom_chunk where gedcom_id = 2;
delete from wt_individuals where i_file = 2;
delete from wt_families where f_file = 2;
delete from wt_sources where s_file = 2;
delete from wt_other where o_file = 2;
delete from wt_places where p_file = 2;
delete from wt_placelinks where pl_file = 2;
delete from wt_name where n_file = 2; -- big
delete from wt_dates where d_file = 2; -- big
delete from wt_change where gedcom_id = 2;
delete from wt_link where l_file = 2; -- big one 2.2 mil rows
delete from wt_media_file where m_file = 2;
delete from wt_media where m_file = 2;
delete wt_block_setting from wt_block_setting join wt_block on wt_block.block_id = wt_block_setting.block_id where wt_block.gedcom_id = 2;
delete from wt_block where gedcom_id = 2;
delete from wt_user_gedcom_setting where gedcom_id = 2;
delete from wt_gedcom_setting where gedcom_id = 2;
delete from wt_module_privacy where gedcom_id = 2;
delete from wt_hit_counter where gedcom_id = 2;
delete from wt_default_resn where gedcom_id = 2;
delete from wt_log where gedcom_id = 2;
delete from wt_gedcom where gedcom_id = 2;

These may change over time if tables are added or changed. 

Of course, you should take a database backup first. Adapted from the NFS documentation, running this on the web host will create a gzipped SQL backup of the database "mywebtrees" with a datetime stamp:

mysqldump --user=yourusername --host=mywebtrees.db mywebtrees -p | gzip > /home/private/backup-webtrees`date "+%F_%H-%M-%S"`.sql.gz

It would be prudent to copy this down to your own backup lest NFS lose it, the same as your media files. 
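The same scp route as for the GEDCOM upload works in reverse for pulling the dump down to your own machine (placeholders as before):

scp "member_sitename@ssh.phx.nearlyfreespeech.net:/home/private/backup-webtrees*.sql.gz" ~/backups/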



Sunday, March 21, 2021

FreeNAS: Share an arbitrary path via SMB

By default, FreeNAS/TrueNAS won't let you share a folder that's not part of its managed volumes, giving the error "The path must reside within a volume mount point" - which presumably means ZFS pools/datasets only. 

This is probably for a good reason; doing otherwise means you're doing something outside the scope of the services it's meant to provide, and you could break stuff - it certainly won't be covered by support. Caveat emptor. 

My use case: I have some old USB hard drives attached to the NAS box so they can be accessed from jails like Plex, but they aren't important enough to use precious redundant storage. They're mounted read-only under /mnt via init commands, and it would be nice to access them directly over the network too, but FreeNAS doesn't want to know about it. 
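For illustration, such an init command might look like this for an ext-formatted drive - the device, mount point, and filesystem type here are placeholders; check yours with gpart show:

mount -t ext2fs -o ro /dev/da1p1 /mnt/usb1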

So, on to breaking stuff. There's a way to trick it - a bait and switch - because the check is really just user-interface validation. 

* Create a folder at the path you want to share 

* Set up your share as you like, access the empty folder to test it works

* Delete the folder and replace it with a symlink! 

* Restart the SMB service (if that won't bother others): it seems to lose the path at first, but after a restart it shares as you'd expect! Maybe after a while it would work without restarting. (A sketch of the whole swap is below.) 
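A minimal sketch of the swap from the FreeNAS shell, with hypothetical pool and drive names (the share itself is created and tested in the web UI between the first and second steps):

# real folder inside a pool, so the UI validation passes
mkdir /mnt/tank/usb1-share
# ...create and test the SMB share at /mnt/tank/usb1-share in the web UI...
rmdir /mnt/tank/usb1-share
ln -s /mnt/usb1 /mnt/tank/usb1-share   # usb1 = the externally mounted drive
# finally, restart the SMB service from the web UI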


I can't vouch for permissions working as expected, especially around writes, as I'm only accessing read-only volumes with read-only users in this instance. Mapping SMB users to *nix users is kind of madness anyway. 


I had also tried:

* creating a symlink in my existing volume pool, but it didn't show up in the path browser when creating shares, and entering the path directly gave the same error. 

* creating a symlink into an existing share: for whatever reason it said I didn't have permission to the share, though all the files and folders are mounted fully readable, albeit as a different user. Possibly there's a group-permissions tweak to be made somewhere, or SMB may be refusing to follow symlinks, especially across volumes/devices. 


Update July 2022:

I used this same trick on TrueNAS 12.0 and it still worked. I now have a 10TB external USB3 drive for bulk, non-critical storage, and I can copy from the share at over 100MB/s. However, there are some caveats. 

It's formatted as ext4 so it works with Linux and FreeBSD, since TrueNAS still doesn't support writing to NTFS. I first tried exFAT so it would work with Windows too, but TrueNAS doesn't appear to support that at all. It mounts ext4 via the ext2fs driver, and even then writing is quite slow - about 3-5MB/s. 

The tricky part is that this time I want to be able to write to it too, even if only at 3MB/s - mainly just to move things around. However, I get an error:

Error 0x8007054F: An internal error occurred. 

So something is wrong with the permissions, or how they map onto ext2, or something else. I'll update if I find a solution. 

Further update: I did not find a solution; instead I decided to accelerate plans to abandon FreeNAS/TrueNAS and move to a Linux-based system with full Docker and external-drive support. 

I chose OpenMediaVault, which is based on Debian and has a lot of the same monitoring I needed, such as S.M.A.R.T. and UPS. Through plugins it also supports ZFS, so I can mount my zpool - but eventually I will decommission it and shift to a file-system pool such as mergerfs, or just SnapRAID for important files. 

Friday, July 10, 2020

Fix Windows 10 Store error 0x80073CF9

With the release of Windows Subsystem for Linux 2, I wanted to try the new Windows Terminal, so I went to install it via the Windows Store. But I kept getting error code 0x80073CF9, and none of the Microsoft solutions/troubleshooters helped. It would download exactly 65 kB and then fail a few seconds later - every time, across many attempts and restarts. 

I hadn't used the Store much, but other things had installed fine, like Ubuntu. The Store isn't the only way to install this app - it's open source, so you could build and install it yourself - but why bother? Plus I might want the Store for something else and didn't want a broken system. 

Long story short: it appears the Terminal installer requires NTFS encryption (EFS) to be supported (although the app is free and open source!?), and somehow it was disabled on my machine. Even after a registry hack, it was disabled again when I restarted. I had to run these two commands, which were hard to track down, to make the setting finally stick, and then I could install. 

Instructions: 

  • Open Windows Powershell as Administrator (right-click menu) 
  • Run: Set-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem NtfsDisableEncryption -Value 0
  • Run: fsutil behavior set disableencryption 0
  • Restart
  • Check that it's set properly (0) with: Get-ItemProperty HKLM:\SYSTEM\CurrentControlSet\Control\FileSystem NtfsDisableEncryption

Now you should be able to install Windows Terminal and any other apps throwing the same error. 


Sunday, August 25, 2019

Add HTTPS to FreeNAS Nextcloud plugin

Using Let's Encrypt and certbot with auto-renewal.

I installed the Nextcloud plugin in FreeNAS some time ago and found it useful, but it had no support for HTTPS at all. Initially I only used it internally, so that was okay-ish, but I wanted to roam, and even internally certain integrations just won't work without it (WebDAV - see my previous post). I had a working install and didn't want to redo it all; I just wanted to add HTTPS. Of course, it is rare to find a straightforward, complete answer on the Internet to a specific circumstance, even one that seems like it should be common.

If I weren't time-poor, or were starting from scratch aware of this limitation, I'd probably have used this automation script or this guide to hardening instead - or, even better, a Docker container with configuration as code, but for that to work I'd have to get Docker working again...

My instructions are partly from https://www.ixsystems.com/community/threads/nextcloud-lets-encrypt-nginx.72643/, but I did NOT allow SSH directly into the Nextcloud host; I go through the FreeNAS host using jexec. You could also use the shell in the UI, but it's painful.

Prerequisites:
  • A domain name which you can point to your FreeNAS host, perhaps with a subdomain using a CNAME record. 
  • Control over port forwarding to FreeNAS - you'll need to open port 80 for certbot to work (though I am unsure whether it needs to be forwarded to the Nextcloud instance specifically) 
  • Basic command line knowledge, including how to connect to the specific jail (a sketch follows this list) 
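Getting into the jail from the FreeNAS host shell looks something like this (the jail name is whatever you called the plugin):

jls                        # note the jail's name or JID
jexec nextcloud /bin/csh   # open a shell inside it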

Brief steps:
  • Connect to your nextcloud host command line - if you don't know how to do that, this guide is too brief for you, sorry :) 
  • Allow installing packages: vi /usr/local/etc/pkg/repos/FreeBSD.conf and change `enabled: no` to `enabled: yes`. (Recommended to change it back later; see the sketch after these steps.)
  • You may want to install a nicer editor than vi at this point. 
  • Install certbot: pkg install py27-certbot
  • certbot certonly --webroot -w /usr/local/www/nextcloud -d your-domain-name.com
  • certbot renew --dry-run
  • crontab -e and add this to regularly check and renew if necessary: 
    • 0 0,12 * * * /usr/local/bin/python2.7 -c 'import random; import time; time.sleep(random.random() * 3600)' && /usr/local/bin/certbot renew --quiet
  • Now you have a certificate and all the automation to keep it up to date!
  • Next we add the HTTPS listener and a redirect from HTTP - this was missing from other instructions I saw. 
  • Edit /usr/local/etc/nginx/conf.d/nextcloud.conf
  • There will just be a single server entry, for port 80. Break it up like this, moving everything else from the original block into the new 443 server:

server {
  listen 80;
  server_name _;

  return 301 https://$host$request_uri;
}

server {
  listen 443 ssl;
  ssl_certificate "/usr/local/etc/letsencrypt/live/your-domain-name/fullchain.pem";
  ssl_certificate_key "/usr/local/etc/letsencrypt/live/your-domain-name/privkey.pem";

  server_name _;

  # ...the rest of the original port-80 server block (root, PHP handling, etc.) goes here...
}
  • service nginx configtest
  • If the above is ok: service nginx reload
  • Visit https://scan.nextcloud.com/ to check your security 
  • Turn off the package repos again to reduce attack surface. 
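For reference, the package-repo toggle from the second step - a sketch assuming the stock file shipped in the jail:

# /usr/local/etc/pkg/repos/FreeBSD.conf
FreeBSD: { enabled: yes }   # set back to "no" when you're done
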
It's probably a good idea to upgrade Nextcloud while you're there:

$ pkg update
$ pkg upgrade nextcloud-php71
$ cd /usr/local/www/nextcloud
$ su -m www -c "php ./occ upgrade"


Friday, April 19, 2019

Restricted SMB Share writing files as specific user/group

I have a FreeNAS server and needed to make SMB (Windows) shares that would write files as a particular user on the server, but be available to only certain unix users. 

Specifically, I had installed Nextcloud and wanted to be able to access the files and upload in bulk, instead of via the web interface or the integrated Explorer client - which is convenient, but only works if you want to synchronise files locally (i.e. keep a copy) and are happy to move everything there. 

Nextcloud writes files as user/group www/www, so I had to have the SMB share write files as that user but be accessible to my user or group. I realise that isn't the best security or auditing model, and I might have achieved the same with a complex group config between FreeNAS and the jail, but that seemed unnecessary for my home server. 

I also created the group nextcloud_files to define which FreeNAS users could access this share. 

In short, these advanced auxiliary SMB settings worked:
  • valid users = @nextcloud_files
  • force user = www
  • force group = www
I was able to mount the share as my Windows user and copy data into it, and it appeared as the www user. 
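To confirm the auxiliary settings actually landed in the generated Samba config, testparm can dump the effective share definitions from the FreeNAS shell (that path is where FreeNAS writes its generated config):

testparm -s /usr/local/etc/smb4.conf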

Additionally, I tried to make the shares visible only to the users who could access them, with either or both of these options:

  • hide unreadable = yes
  • access based share enum = yes
But that doesn't seem to work, and probably requires user-level settings on FreeNAS that aren't available in the web interface - and I don't like invisible manual configuration. 

Nextcloud note: it's not a good idea to side-load files into its storage. Although they do appear in the web frontend, disk usage isn't attributed properly, and I'm not sure whether other metadata is tracked. There just doesn't seem to be a nice way to pre-load it with GBs of photos.

To update the database, there is a server-side command to re-scan all the files, e.g.: 


root@nextcloud:/ # sudo -u www php /usr/local/www/nextcloud/occ files:scan joel
The process control (PCNTL) extensions are required in case you want to interrupt long running commands - see http://php.net/manual/en/book.pcntl.php
Starting scan for user 1 out of 1 (joel)
+---------+-------+--------------+
| Folders | Files | Elapsed time |
+---------+-------+--------------+
| 305     | 21041 | 00:21:53     |
+---------+-------+--------------+

This is not ideal. 

It turns out there is also a WebDAV address you can map instead of a Windows share, which would solve this problem! 


... but Windows doesn't like it, no matter whether the address is internal or external, with the port included or not. Of course. 
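For the record, this is the sort of mapping I was attempting - the URL format is Nextcloud's standard WebDAV address, with hostname and username as placeholders - and it failed regardless:

net use Z: "https://nextcloud.example.com/remote.php/dav/files/joel/"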

UPDATE: I solved this and described it in another post

Saturday, September 30, 2017

Fix crackling sound with X-Fi on Ubuntu/Mint pulseaudio

May I never lose this link:

https://guh.me/solving-creative-sound-blaster-x-fi-titanium-crackling-slash-distortion-on-linux

The trick is disabling PulseAudio’s timer-based scheduling. Fire up the terminal, then type:
su -c "nano /etc/pulse/default.pa"
Look for the line that contains load-module module-udev-detect, and append tsched=0 to it. Now it should look like this:
load-module module-udev-detect tsched=0
Save the buffer and exit the editor, then restart PulseAudio with the following commands:
pulseaudio -k
pulseaudio --start
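If you'd rather not open an editor, the same edit as a one-liner - this assumes the line is still the stock load-module module-udev-detect:

sudo sed -i 's/^load-module module-udev-detect$/& tsched=0/' /etc/pulse/default.pa
pulseaudio -k
pulseaudio --start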

Sunday, August 6, 2017

Run a Raspberry Pi and USB hub off one power brick

Annoyed by having to take up so many power sockets for supposedly low-power devices? 


What's going on here?
  • The Raspberry Pi needs a minimum of 700mA 
  • Power goes into the USB hub (5V 2A DC) 
  • The hub provides the Pi with power via micro-USB (the white cable)
  • The Pi's USB port connects back to the hub for devices 
  • It works, trust me :) just don't use lots of USB devices that need power. 
Ignore the bit at the bottom with the ribbon cable; it's a Nordic radio attached to the GPIO pins for a wireless node project. 

This whole thing is then hung on the wall: Blu-Tack holds the hub on the front, and the enclosure has slots on the back that slide onto screw heads. After a while the hub very slowly slides down and then hangs from the cable; I think I'll have to add tape.