Paste into password fields that are blocked

So I find it really annoying that there are websites which block pasting of passwords.

Here's a way around it:

1) Manually (Firefox)

  • Start the developer tools (F12)

  • click on the Inspector tab

  • click on the element-picker arrow next to the Inspector

  • click on the password field (to highlight its code)

  • find and remove the onpaste attribute (e.g. onpaste="return false")

2) Automatically, each time... (Firefox)

  • Go to about:config

  • Search for "" and switch it to false

3) Automatically, each time (Chrome)

  • Use the "Don't F*ck With Paste" extension (spelled with the "u" when searching for it)

oggenc settings with various WAV input files

So I was trying to encode a WAV file into Ogg Vorbis format. I kept getting the following error:

Skipping chunk of type "", length 0
Skipping chunk of type "", length 1635017060
Warning: Unexpected EOF in reading WAV header
ERROR: Input file "FOOBAR.WAV" is not a supported format

I checked the encoding:

> sndfile-info ./FOOBAR.WAV
Length : 18391552
RIFF : 18391544
fmt  : 484
  Format        : 0x1 => WAVE_FORMAT_PCM
  Channels      : 2
  Sample Rate   : 48000
  Block Align   : 4
  Bit Width     : 16
  Bytes/sec     : 192000
data : 18391040

Sample Rate : 48000
Frames      : 4597760
Channels    : 2
Format      : 0x00010002
Sections    : 1
Seekable    : TRUE
Duration    : 00:01:35.787
Signal Max  : 18887 (-4.79 dB)

Oggenc can't parse this file's WAV header, so we feed it the file as raw PCM instead (-r). Raw input is assumed to be 44.1 kHz by default; since this file is 48 kHz, we also have to tell oggenc the real rate with -R 48000:

> oggenc -b 128 -r -R 48000 -o foobar.ogg FOOBAR.WAV
Encoding "FOOBAR.WAV" to
at approximate bitrate 128 kbps (VBR encoding enabled)
        Encoding [ 0m01s so far] -

Done encoding file "foobar.ogg"

        File length:  1m 35.0s
        Elapsed time: 0m 01.4s
        Rate:         70.5000
        Average bitrate: 108.4 kb/s
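As a sanity check, the sndfile-info numbers above are self-consistent; a quick shell computation (sizes copied from that output) recovers the reported duration:

```shell
# Values copied from the sndfile-info output above
data_bytes=18391040        # size of the "data" chunk
channels=2
bytes_per_sample=2         # 16-bit samples
rate=48000

# frames = data bytes / (channels * bytes per sample)
frames=$(( data_bytes / (channels * bytes_per_sample) ))
echo "frames: $frames"     # matches the reported 4597760

# duration = frames / sample rate
awk -v f="$frames" -v r="$rate" 'BEGIN { printf "duration: %.3f s\n", f / r }'
```

The result, 95.787 s, matches the Duration line above, so the audio data itself is fine; only the header chunks are off, which is why raw mode works.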

CTGIMF007E Object Not Found error in ISIM / ITIM

We started getting this error for several erService global IDs associated with several services. Recons were running for these services, but the endpoints were not reachable, so the recons kept getting rescheduled internally. Since the endpoints were no longer around, we deleted the services through the admin console. Apparently a recon got stuck in limbo: every ten minutes the scheduler would try to run it, fail because it couldn't find the service (with the error noted above), and then reschedule itself for another ten minutes.

After going through the typical checks (where else is this erGlobalId being used?) we found it nowhere. But the error kept reappearing every ten minutes.

So this is kind of a hack, but it got the job done: create a dummy service of any type through the admin console, find the service in the LDAP, and change its global ID to the missing ID. The recon will fail, and the scheduler won't reschedule it. Error gone! :)

Exact Steps to fix:

1) create a dummy service (e.g. LDAP Service to keep it simple)
2) save the ldif of that service as a backup
3) change the erGlobalId of that service to the missing erGlobalId direct in TIM LDAP
4) edit the service in the admin console and change something (e.g. service name)
5) wait ten minutes until the recon runs again: note it might error out due to missing attributes, because the original service might not match the one you created. That's fine: as far as TIM is concerned it's just an error on a processed request, so the request is done and won't be resubmitted.
6) wait another ten minutes to confirm there are no errors
7) rinse, repeat for the other missing erGlobalIds
8) delete the dummy service.
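Step 3 above can be scripted with ldapmodify. In a typical ITIM DIT the erglobalid is also the entry's RDN, so the change is a modrdn rather than a plain attribute replace. A sketch with made-up DN and ID values (substitute your dummy service's real DN and the missing erGlobalId from the error message):

```shell
# HYPOTHETICAL values for illustration only -- substitute your own.
DUMMY_DN='erglobalid=9999999999999999999,ou=services,erglobalid=00000000000000000000,ou=org,dc=itim'
MISSING_ID='1234567890123456789'

# Rename the dummy service's entry so it carries the missing erGlobalId
cat > fix-globalid.ldif <<EOF
dn: $DUMMY_DN
changetype: modrdn
newrdn: erglobalid=$MISSING_ID
deleteoldrdn: 1
EOF

# Apply it against the TIM LDAP (bind DN/password are your directory's):
# ldapmodify -h tim-ldap-host -p 389 -D cn=root -w secret -f fix-globalid.ldif
```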

Configuring Postfix to use Gmail SMTP Server and Personal Domain Email

So I followed the most excellent guide on How-To Forge for configuring Postfix to use Gmail's SMTP server; it does a pretty good job of describing the steps for various distributions. The basic idea is to configure Postfix to use a relay server, authenticate with a sasl_passwd file, and use TLS for the connection.

Once I got that working, everything was great! I would send an email from my client through my server, or using Alpine from the terminal, and Postfix would use Google as the relay. Except for one thing... Google would rewrite my From address to my Gmail account. This kind of defeats the whole point of using Google as a relay (my ISP blocks port 25, and its own relay is shutting down since they're getting out of the email business): I wanted to send email with my own custom-domain address.

To fix this problem I found the following Google tech note:

Basically, to send as your custom domain you need to “verify” to Google that you actually own that domain. This consists of adding a new email address to your Gmail account, giving it your email server's address, port (25/587/465), and your credentials on your email server. Google will email you at that address, through your own server, with a verification code. You enter the verification code, and it becomes an address that you can even use within the Gmail user interface (as a drop-down, or as the default). [Note: it will NOT pull down your emails or anything; it's just to verify that you DO have an account on your own mail server.]

Once verified, when you send through your own mail server with Gmail as the relay, Google will not rewrite your From address.

Note: I assume that you can add more addresses like “postmaster@” or “apache@” the same way, in case you have an application that needs to send email outside of your own server as itself. But all those system email addresses get routed locally to my own account anyway.

Note: to configure Gmail authentication, if you have two-factor enabled you will need an “App Password”; otherwise use the “Enable less secure apps” feature.
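For reference, the Postfix side of that guide boils down to a few main.cf lines plus the password map. A sketch with the standard Gmail relay values (file paths are the usual Postfix defaults; adjust for your distribution):

```
# /etc/postfix/main.cf additions
relayhost = [smtp.gmail.com]:587
smtp_use_tls = yes
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous

# /etc/postfix/sasl_passwd (then run: postmap /etc/postfix/sasl_passwd)
# [smtp.gmail.com]:587    you@gmail.com:your-app-password
```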

Migrating from LEAP to Tumbleweed is a Zypp

So after careful consideration, I decided to upgrade my server to Tumbleweed instead of LEAP. I already did this on my desktop and saw no problems (minor config updates and the usual stuff, but expected). I am now doing it on my server because I've seen no adverse problems with the laptop/desktop running Tumbleweed, and the added packages are a plus.
Migrating from LEAP 42.1 to Tumbleweed was a cinch. Their Upgrade Guide seems to be spot on. The only caveat (hence this article) was that I had a plethora of ancillary repos for all the crap that didn't come with LEAP. In order for my upgrade not to break dependencies, I had to add a couple of steps.
High level, the steps are:

  1. update all packages (reboot!)

  2. move your current repos out of the way, backing them up

  3. add the default repos

  4. add the custom repos for tumbleweed

  5. zypper dup

Except for step four, the steps are covered in the link above.

To resolve step four, I took my current custom repos (packman, Mono:Factory, google-chrome, virtualbox) and resolved their Tumbleweed URL versions.
First, before you start at all, list all of your repos with their URLs: zypper lr -u
I paid attention to my custom repos: google-chrome, virtualbox, mono:factory, packman. I took the URLs that they were pointing to (some ending in "42.1" for LEAP) and plugged them into my browser. I went up to the parent directory, found the Tumbleweed version of the repo, and copied that URL. This is the new URL for the repo. So, for example, packman had "" and when I plugged it into my browser, I navigated up to "" and then copied the Tumbleweed repo "".
I did this for all of my "extra" repos (google-chrome and virtualbox would stay the same). I copied and pasted the new URLs into a text file for easy access later. Then I started through the steps in the document listed above. When I got done with step three, I added my custom repos with the same syntax: "zypper ar -f -c <URL> packman". Then I went on to step five.
My only issues were conflicting versions from various repositories. The easy way out, without tearing your hair out, is to just remove the installed package. Make note of it, and once you have an operational system, reinstall it if needed. For example, ffmpeg-2.8 (installed) was in conflict with ffmpeg-3.x; I just uninstalled 2.8 ("zypper rm ffmpeg-2.8") and jotted it down for reinstallation later. Get your system up and operational first, then deal with the minutiae.
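The URL rewriting in step four can be scripted; this sketch assumes the common repo layout where only the distro-version path component changes (the repo URL and alias here are made up for illustration):

```shell
# Rewrite a LEAP 42.1 repo URL to its Tumbleweed equivalent by swapping the
# distro-version path component (for my repos, that was the only difference).
leap_to_tw() {
  printf '%s\n' "$1" | sed -e 's#openSUSE_Leap_42\.1#openSUSE_Tumbleweed#' \
                           -e 's#openSUSE_42\.1#openSUSE_Tumbleweed#'
}

old='http://example.org/repositories/mozilla/openSUSE_Leap_42.1/'
new=$(leap_to_tw "$old")
echo "$new"

# then re-add the repo under its new URL (example alias):
# zypper rr mozilla && zypper ar -f -c "$new" mozilla
```

Always eyeball the rewritten URL in a browser first, as in the manual procedure above; not every project follows this naming scheme.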

Enabling SSL Client Trust for TDI / IDI ... Simply

So you want to connect to that LDAP server via SSL, but don't know how, or can't make heads or tails of the IBM documentation? Here are a few easy steps; it took me forever to figure out the right order and location.
First, it's assumed that you have TDI V7.1.1 or higher, and that you're running ibmditk on the same box as ibmdisrv. When you installed TDI, it asked you to select a "Solutions" directory. Make sure you know where that is. If you don't know the default solutions directory, go to TDI_install_dir/bin and see "" for the contents. In my case it's /opt/IBM/TDI/V7.1.1/bin/ and the solutions directory is TDI_SOLDIR="/opt/IBM/TDI/solutions".

1) download the certificate from the LDAP (or whatever SSL) server you want to connect to. You can easily use a tool like Portecle to do an SSL connect to the server, and save the certificate as a PEM file. For our purposes "foo.pem".
2) start the ibmditk (TDI Console)
3) select "Keymanager"
3.1) open the solutions directory's jks file: /opt/IBM/TDI/solutions/serverapi/testadmin.jks
3.2) the password is "administrator"
3.3) select the dropdown and switch it to "Signer Certificates"
3.4) add the PEM certificate foo.pem
3.5) save the file with the same password, and click OK to overwrite.
4) In the TDI console under Servers, click "STOP Server", wait until it stops and Quit or Restart the TDI Console.
5) Start the TDI Console, and go to "Resources" -> "Connectors"
6) Add a connector for the SSL server you want to connect to. In our case an LDAP server on port 636 as SSL.
7) Fill out the appropriate information, and go to the "Input Map" tab -> "Connect" on the right.
8) DONE.

Now, I leave it to the reader to customize the jks file's password, location, etc. Warning... it's sticky to untangle the internal client/server certs.
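If you prefer the command line to the Keymanager GUI, the same import can be done with the JDK's keytool (keystore path and password are the ones from step 3; the LDAP host and alias here are placeholders, so the actual commands are left commented):

```shell
# Keystore and password from step 3 above
KEYSTORE=/opt/IBM/TDI/solutions/serverapi/testadmin.jks
STOREPASS=administrator

# Step 1 alternative: grab the server's cert without Portecle
# (ldap.example.com:636 is a placeholder):
# openssl s_client -connect ldap.example.com:636 </dev/null 2>/dev/null \
#   | openssl x509 -outform PEM > foo.pem

# Import foo.pem as a trusted signer certificate, then list to verify:
# keytool -importcert -noprompt -trustcacerts -alias my-ldap-server \
#         -file foo.pem -keystore "$KEYSTORE" -storepass "$STOREPASS"
# keytool -list -keystore "$KEYSTORE" -storepass "$STOREPASS"
echo "import foo.pem into $KEYSTORE as a signer cert"
```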

Python3 and MySQL on OpenSuSE 42.1

So this might be outdated now that 42.2 is out, but I am not ready to upgrade until they get the kinks out. This version is still heavily dependent on a python2 implementation, so if you want to do DB development with python3, it's not going to work (afaik - corrections requested).
Best thing to do is to uninstall (if installed) the python-PyMySQL package.
sudo zypper rm python-PyMySQL
Then just use "pip" to install the Python3 version...
sudo pip search mysql
sudo pip install PyMySQL3

It might ask you to update pip itself. I ended up doing so (outside of the zypper/rpm method - perhaps not wise).
Now when I do
user@host:~> python3
Python 3.4.5 (default, Jul 03 2016, 13:32:18) [GCC] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import pymysql;
>>> conn = pymysql.connect(host='', user='root', passwd=None, db='mysql');
>>> help ("pymysql")

Everything seems to work... well, at least no errors. I would have liked to stick to distro packages ONLY for this software, but it was more important to be able to use ALL of my machines for this project instead of relying on the Tumbleweed machines.

More Eclipse woes, switching to IntelliJ

So I started dabbling in the IntelliJ IDE, and I am very impressed so far with their community edition. I would typically use Netbeans whenever possible, and Eclipse when it would run, but Netbeans dropped support for Python, and Eclipse started crapping out (see bug #1009882) and crashing all over the place on Tumbleweed, so I couldn't use PyDev on Eclipse. After kicking the tires with PyCharm (the IntelliJ-based Python IDE) I am very impressed! :) I might actually shell out a few pennies for this. I am going to see if work pays for it first.
They have a community version which is limited, and if you pay you get the "Ultimate" version. Check them out: Java IDE, Python IDE, and they even support PHP and other languages. It's one of the best IDEs yet.
Note: no, I don't work for them! ;)

Boot problems solved: Tumbleweed with Disk Encryption and Intel video during initrd

So when I installed Tumbleweed on the laptop, everything was working fine with my Skylake mainboard's video (notably Leap 42.1 did NOT work at all with its Intel Corporation HD Graphics 520 (rev 07)). As soon as I encrypted my home partition, it stopped being able to boot up completely: it got through the initrd loading, the screen blanked and came back, and then... nothing, it just sat there. Caps lock didn't work. So I changed the crypto settings to time out after 15 seconds, and it booted just fine, except that /home wasn't decrypted or mounted; I had to mount the encrypted partition by hand with systemctl. Every once in a while, however, it would work, and I would get a plain-text request (a green one-liner in the center of the screen) to type in the password.
I figured it was something to do with the initialization of the graphics driver during initrd phase.
So I ended up changing the kernel parameter from "silent=splash" to "silent=no", and that did the trick. It boots every time now, and I can type in the password as normal. BTW, the boot splash screen never worked for me either: I used to get three question marks on the boot screen, each lighting up (bright green instead of green) from left to right. On my desktop (again Intel) I would get a static image, but at least I knew that when I hit that screen I could type the password in (blind, because there was NO password prompt) and it would continue - not on the laptop - not even capslock/numlock on the laptop.
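To make that change persistent on openSUSE, edit the kernel command line in /etc/default/grub and regenerate grub.cfg. A minimal sketch working on a throwaway copy (the GRUB_CMDLINE_LINUX_DEFAULT contents are made up; the silent= values are the ones from this post):

```shell
# Example line as it might appear in /etc/default/grub (made-up resume device)
printf 'GRUB_CMDLINE_LINUX_DEFAULT="resume=/dev/sda2 silent=splash quiet"\n' > /tmp/grub.example

# The actual edit: silent=splash -> silent=no
sed -i 's/silent=splash/silent=no/' /tmp/grub.example
cat /tmp/grub.example

# After making the same edit to the real /etc/default/grub, regenerate:
# sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```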