
Pi-Hole: Metrics from a Postgres database

I love Pi-Hole.

It is one of my favourite services that I have running on my home network.

I definitely take Pi-Hole for granted at home and only really appreciate how effective it is when I travel and connect to other networks.

While the function of Pi-Hole is brilliant, my data warehouse analyst and reporting background craves more power from the metrics behind Pi-Hole.

By default Pi-Hole offers a simple dashboard with a few catchy graphs and metrics.  This can be hypnotising to watch when first installing Pi-Hole; there is the shock of seeing how many DNS queries are blocked as you visit different websites.

Behind the scenes is a SQLite database located on the server for power users to query.

We need more power!

For me, I have a couple of problems with the Pi-Hole dashboard and crave more functionality and power than what is on offer.

Some issues I have:

  • On my home network I have two Pi-Hole servers working in redundancy.  The main Pi-Hole server sits on a hypervisor while the backup one is on a Raspberry Pi.  The main rationale for this is that if a power outage knocks the hypervisor offline while I am not at home, the backup Pi-Hole server will take over and save an angry phone call from the wife about why she can’t access the internet.

    One of my wishes is to see the combined Pi-Hole metrics across both servers in one dashboard.

  • A couple of items on the network create a large amount of noise that skews the metrics.  For instance, the security cameras used to query an NTP server every second until I routed them through a local NTP server at home.  My work computer, when it drops off the VPN, will flood particular addresses with traffic.

    I would like to exclude these addresses from the metrics, but not exclude them from the Pi-Hole functionality.

  • Access to better analytics from the Pi-Hole data. 
    • How about a heat map of blocked queries on devices? 
    • Or weekly trends with a moving average? 
    • Or the ability to drill into a time when there is a high amount of blocked traffic to determine which device and application are behind it? (Answer: my kid playing mobile app games)

Napkin Solution

This is what I am working on at the moment to extract and load the data from my two Pi-Hole servers into a Postgres database to query from.

  1. An NFS share is set up on both Pi-Hole servers to share out the pihole-FTL.db to a Linux server
  2. A cron job runs every x minutes to run a Python script.  The script queries the pihole-FTL.db for the records between the last timestamp it ran and the current timestamp in the Pi-Hole database, and lands the data into a “landing” schema on the Postgres database (a minimal sketch of this step follows the list)
  3. A stored procedure will run to transfer and normalise the data into an ODS schema
  4. A reporting tool will then query the ODS schema.  In this case I am using Apache Superset, just because it is freely available and its web nature means I can share dashboards to multiple devices without installing client software.
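
As a rough illustration of step 2, here is a minimal sketch of the incremental extract in Python.  It assumes the FTL database exposes a queries table with id, timestamp, domain, client and status columns (check the schema of your Pi-Hole version), that the NFS-mounted pihole-FTL.db is readable at the path shown, and that a landing table landing.pihole_queries already exists in Postgres; the paths, table names and connection details are illustrative only.

import sqlite3
import psycopg2

PIHOLE_DB = "/mnt/pihole1/pihole-FTL.db"   # NFS-mounted FTL database (illustrative path)
PG_DSN = "dbname=pihole user=etl password=changeme host=pghost"  # hypothetical connection string

def extract_increment():
    pg = psycopg2.connect(PG_DSN)
    with pg, pg.cursor() as cur:
        # High-water mark: the newest FTL timestamp already landed
        cur.execute("SELECT COALESCE(MAX(query_ts), 0) FROM landing.pihole_queries")
        last_ts = cur.fetchone()[0]

        # Read-only connection to the FTL SQLite database over the NFS share
        ftl = sqlite3.connect(f"file:{PIHOLE_DB}?mode=ro", uri=True)
        rows = ftl.execute(
            "SELECT id, timestamp, domain, client, status "
            "FROM queries WHERE timestamp > ? ORDER BY timestamp",
            (last_ts,),
        ).fetchall()
        ftl.close()

        # Land the new rows; a stored procedure normalises them into the ODS schema later
        cur.executemany(
            "INSERT INTO landing.pihole_queries (ftl_id, query_ts, domain, client, status) "
            "VALUES (%s, %s, %s, %s, %s)",
            rows,
        )
    pg.close()

if __name__ == "__main__":
    extract_increment()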

Performance of querying sqlite through a NFS share

Initially I was worried about querying the pihole-FTL.db over the NFS share, thinking it would be too slow to be practical.

I was thinking of a few clunky solutions:

  1. Copy the entire DB over to the postgres host and load it from there
  2. Write a script that runs on the Pi-Hole server to extract the incremental data from the pihole-FTL.db into files and load the files

Thankfully, with the volume of traffic we produce, it is not a concern.

The initial load of copying all the data out of the Pi-Hole database does take a long time if there is a large amount of history to transfer across.

But once the initial load is done, small incremental bites of data only take a few seconds.

Current state

I have a pilot running on my home network as a proof of concept.

Fundamentally it works fine and I can glean some interesting metrics from my Pi-Hole servers.

At the moment the security is atrocious, as I am just using a superuser username and password throughout the pipeline.

Once I have cleaned up the Postgres SQL and Python code, I can share the complete solution.

Qlik Replicate Expressions – Wait. That’s Illegal

A new team will be taking over out-of-hours on-call for Qlik Replicate, and I was tasked with training them up to diagnose and resolve problems with QR.

I thought it was a good idea to create some practice QR tasks in our development environment, intentionally put some problems in the tasks, and let the new team diagnose the problems in a safe, no-pressure scenario.

I did the obvious ones, like putting an incorrect password in an endpoint and a typo in a filter criterion.

Looking for another scenario I thought – “Nothing better than a divide by zero error.”

Wait. That’s Illegal – Part 1

I created a simple table with fields CURR_BALANCE = 10000 and ASSET_VALUE = 0, and added a new field called LVR with the expression:

$CURR_BALANCE / $ASSET_VALUE

Ran a full load expecting lots of nice and nasty error messages for the new team to investigate.

Hmm – no errors.

That’s not what I expected.

All the rows were transferred across and there were no errors or warnings in the log file, no data errors or anything.

I bumped up all the logging to “Trace” and reran the full load.  Once again, no error messages.

Redirecting the output from the task to a Kafka topic, I noticed the records with the intentional div/0 scenario were coming through with a null LVR.

{
  "magic": "atMSG",
  "type": "DT",
  "headers": null,
  "messageSchemaId": null,
  "messageSchema": null,
  "message": {
    "data": {
      "ACCOUNT_ID": 5,
      "ACCOUNT_NAME": "Elissa",
      "CURR_BALANCE": "10000.00",
      "ASSET_VALUE": "0.00",
      "LVR": null
    },
    "beforeData": null,
    "headers": {
      "operation": "REFRESH",
      "changeSequence": "",
      "timestamp": "",
      "streamPosition": "",
      "transactionId": "",
      "changeMask": null,
      "columnMask": null,
      "transactionEventCounter": null,
      "transactionLastEvent": null
    }
  }
}

Then I remembered: CDC expressions are built on SQLite syntax.  Looking up how SQLite handles div/0 scenarios, I found that by default SQLite returns a NULL instead of failing the query on a divide by zero.

Quoting from SQLite – Difference Between Engines:

SQLite does not raise arithmetic exceptions (eg. divide by zero, 1/0). SQLite returns a NULL value for 1/0.
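
You can reproduce this outside QR with Python's built-in sqlite3 module; a quick sanity check of SQLite's behaviour, not of QR itself:

import sqlite3

con = sqlite3.connect(":memory:")
# SQLite does not raise on divide by zero; it quietly returns NULL
print(con.execute("SELECT 1 / 0").fetchone())          # (None,)
print(con.execute("SELECT 10000.0 / 0.0").fetchone())  # (None,)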

Wait. That’s Illegal – Part 2

Having got past my div/0 conundrum with that explanation, I had a look at a “valid” record.

{
    "magic": "atMSG",
    "type": "DT",
    "headers": null,
    "messageSchemaId": null,
    "messageSchema": null,
    "message": {
        "data": {
            "ACCOUNT_ID": 1,
            "ACCOUNT_NAME": "Alice",
            "CURR_BALANCE": "12345.00",
            "ASSET_VALUE": "500000.00",
            "LVR": 0        <------------- Huh?
        },
        "beforeData": null,
        "headers": {
            "operation": "REFRESH",
            "changeSequence": "",
            "timestamp": "",
            "streamPosition": "",
            "transactionId": "",
            "changeMask": null,
            "columnMask": null,
            "transactionEventCounter": null,
            "transactionLastEvent": null
        }
    }
}

Again our LVR value was wrong.

12345.00 / 500000.00 should be 0.02469; instead of 0

A couple of things popped to mind.

It looked like the classic integer division issue that always catches out new (and experienced) programmers.

But that didn’t make any sense since the source fields were both numeric(18,2).

I checked the destination field that I added.

It was set to numeric(18,2), which isn’t precise enough for the LVR value, but we should still have got 0.02 instead of 0.

OK – QR must not like the “custom” precision of numeric(18,2) for the LVR field. 

I tried forcing the LVR field data type to REAL(4).

Still got the incorrect value for LVR.

Maybe for some reason QR/SQLite treats the values coming in as integers, even though the source data type is numeric(18,2) on the database.

Let’s give this a go:

CAST($CURR_BALANCE AS NUMERIC) / $ASSET_VALUE

^^ Nope

CAST($CURR_BALANCE AS NUMERIC) / CAST($ASSET_VALUE AS NUMERIC) 

^^ Long shot but nope

CAST($CURR_BALANCE AS REAL) / $ASSET_VALUE 

^^ At last success

{
    "magic": "atMSG",
    "type": "DT",
    "headers": null,
    "messageSchemaId": null,
    "messageSchema": null,
    "message": {
        "data": {
            "ACCOUNT_ID": 1,
            "ACCOUNT_NAME": "Alice",
            "CURR_BALANCE": "12345.00",
            "ASSET_VALUE": "500000.00",
            "LVR": "0.0246"  <------------- Finally!
        },
        "beforeData": null,
        "headers": {
            "operation": "REFRESH",
            "changeSequence": "",
            "timestamp": "",
            "streamPosition": "",
            "transactionId": "",
            "changeMask": null,
            "columnMask": null,
            "transactionEventCounter": null,
            "transactionLastEvent": null
        }
    }
}

It works – but forcing a precise numeric value into a floating point data type like REAL leaves me uneasy, especially in a finance world where precision is needed.

I haven’t researched or tested whether this risks generating an imprecise number through the division; something I will need to put on my todo list.
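
The behaviour is easy to demonstrate in plain SQLite (again via Python's sqlite3 module as a stand-in for the QR expression engine, and assuming the operands really are arriving as integers):

import sqlite3

con = sqlite3.connect(":memory:")
# When both operands look like integers, SQLite performs integer division
print(con.execute("SELECT 12345 / 500000").fetchone())                # (0,)
# Casting one side to REAL forces floating point division
print(con.execute("SELECT CAST(12345 AS REAL) / 500000").fetchone())  # (0.02469,)
# REAL is an IEEE-754 double, so exact decimal precision is not guaranteed
print(con.execute("SELECT 0.1 + 0.2").fetchone())                     # (0.30000000000000004,)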

Conclusion

My design principle for Qlik Replicate tasks is to keep them as simple as possible.

Grab the data off the source and get it to the destination as quickly as possible.

As a design principle, I am against putting expressions in QR tasks:

  1. The expression language is not as feature rich as other programming languages
  2. It hides business rules, adding a black box and complexity to the pipeline

Point 2 was the big one for me.  I didn’t want downstream users messaging me every time there was an abnormality in the data.  With no expressions in the QR tasks – I could rightfully say, “QR just grabs what it finds on the source database – go and talk to them”

Now, with the discovery of these little “features” that can come from expressions, and more importantly no errors or warnings about them, I have even more incentive not to program expressions into tasks.

QR in my eyes is not an “ETL” tool – it is just an “EL” tool.

If your designers want complex expressions and transformations programmed into their data pipelines, it should be the downstream developers’ task.

That’s why they get paid the big bucks.


SQL: Quick and dirty comparing result sets

When doing regression testing, I need to prove that the queries I have changed still return the same result sets.

If the result sets are on the same database, you can use the MINUS operator.

If not, it can be a pain to shift result sets from database to database, especially if they are large.

For quick and dirty testing, I like using the T-SQL function CHECKSUM and the PL/SQL function ORA_HASH.  They can give you some quick results to test whether a column of data has a high probability of being the same as another column.

To test a result set, I have the following Excel formulas saved in my “Handy formulas” notebook:

TSQL
=",SUM(CAST(CHECKSUM("&A1&") AS BIGINT)) AS "&A1
PL/SQL
=",SUM(ORA_HASH("&A1&")) AS "&A1

With a list of column names you can then build a checksum SQL statement from Excel.

Running this query on both result sets, differences can be found by comparing the returned values.
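
The same trick can be scripted instead of driven from Excel.  Here is a small Python sketch that builds the T-SQL flavour of the checksum query from a list of column names; the column and table names are placeholders:

# Build a quick and dirty checksum query for a list of columns (T-SQL flavour).
# Swap the expression for SUM(ORA_HASH(...)) to produce the PL/SQL version.
columns = ["ACCOUNT_ID", "ACCOUNT_NAME", "CURR_BALANCE"]  # placeholder column names

select_list = ",\n  ".join(
    f"SUM(CAST(CHECKSUM({col}) AS BIGINT)) AS {col}" for col in columns
)
query = f"SELECT\n  {select_list}\nFROM my_result_set;"  # my_result_set is a placeholder
print(query)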

Excel & VBA – Concatenating two fields together with formatting

A work colleague asked me to do a “Mate’s rate” job for his team yesterday.

They had a MS Word document with a huge table in it that they needed to convert to Excel to load into an application.

Paraphrased, it looked like this:

My first thought was to copy this table out and paste it into excel to see how it looks:

Two problems:

  1. The cell “Some data” is merged across three rows
  2. Line 1, Line 2, Line 3 are now split across three rows instead of being in the one cell

I wrote a macro to split out the merge cells; but when it came to combining cells for Line 1, Line 2 and Line 3 I ran into a problem.

None of the usual commands and from my quick google searching could combine the cells and retained the formatting.

So commands like:

=CONCAT(B2:B4)

=B2 & B3 & B4

Would not retain the formatting, let alone add line feed characters to make it a multiline cell.

I’m sure there is a better way, but strapped for time I programmed the following subroutine in VBA:

Public Sub combineCells(cell1 As Range, cell2 As Range, output As Range)
    Dim formatIndex As Long
    Dim formatOffset As Long
    Dim hyperlink1 As String
    Dim hyperlink2 As String

    hyperlink1 = ""
    hyperlink2 = ""

    ' Check if the cell has a hyperlink but doesn't have a text version of the hyperlink in Value2
    If cell1.Hyperlinks.Count <> 0 And InStr(cell1.Value2, "https:") < 1 Then
        hyperlink1 = " [" & cell1.Hyperlinks(1).Address & "]"
    End If

    If cell2.Hyperlinks.Count <> 0 And InStr(cell2.Value2, "https:") < 1 Then
        hyperlink2 = " [" & cell2.Hyperlinks(1).Address & "]"
    End If

 
    ' Handling if the first cell is blank.  If so we don't want a LF at the top of the cell
    If Trim(cell1.Value2) <> "" Then
        output.Value2 = cell1.Value2 & hyperlink1 & vbLf & cell2.Value2 & hyperlink2
        formatOffset = Len(cell1.Value2) + Len(hyperlink1) + 1
    Else
        output.Value2 = cell2.Value2 & hyperlink2
        formatOffset = 0    ' Nothing from cell1 made it into the output, so cell2's formatting starts at position 1
    End If

  
    ' Copies the formatting from cell1 to the final cell
    ' You can add more options to transfer over different formatting
    For formatIndex = 1 To Len(cell1.Value2)
        output.Characters(formatIndex, 1).Font.Bold = cell1.Characters(formatIndex, 1).Font.Bold
        output.Characters(formatIndex, 1).Font.Italic = cell1.Characters(formatIndex, 1).Font.Italic
        'output.Characters(formatIndex, 1).Font.Underline = cell1.Characters(formatIndex, 1).Font.Underline
    Next

  
    ' Copies the formatting from cell2 to the final cell
    For formatIndex = 1 To Len(cell2.Value2)
        output.Characters(formatIndex + formatOffset, 1).Font.Bold = cell2.Characters(formatIndex, 1).Font.Bold
        output.Characters(formatIndex + formatOffset, 1).Font.Italic = cell2.Characters(formatIndex, 1).Font.Italic
        'output.Characters(formatIndex + formatOffset, 1).Font.Underline = cell2.Characters(formatIndex, 1).Font.Underline
    Next

End Sub

Oh boy it runs slow – combining a couple of thousand cells took half an hour, and I had the typical worry that Excel had crashed because everything was locked up.

But it worked for the quick task that I was doing.

Qlik Replicate – Simple trick for Source file connectors

Quick post on a trick I use in Qlik Replicate

Qlik Replicate has a Source connector to consume flat files from a drive. 

More details can be found on the Qlik website

For change data capture of these files, Qlik will only consume files that have a newer last modified date than when the task was started.

So if for some reason you have to stop and reload files that are already present, you have to change the “Last Modified Date”.

A simple PowerShell command (adapted from itechtics.com) can be used to reset all the dates in a directory of your choice.

Get-ChildItem -force PathToItem * | ForEach-Object{$_.LastWriteTime = ("yyyy-mm-dd hh24:mi:ss")}

So to change all the dates of the files in the directory C:\temp\load to the current time (as of writing):

Get-ChildItem -force c:\temp\load * | ForEach-Object{$_.LastWriteTime = ("2023-06-08 11:05:00")}

If a Qlik Replicate task is running and watching this directory, it will reload the files it finds there (in my example, my_load.file.csv).

Scam emails impersonating Church’s bulletin email

This is a page for my church to help them recognise scam emails.

I will update this page with feedback and new information.

What has happened?

Scammers have obtained the distribution list of the church’s email bulletin.  This could be from scammers obtaining a previous email where the recipients were added to the TO list instead of the BCC list.

They are using this list to send scam and phishing emails impersonating the Church’s email address.

This is a targeted scam because:

  • The recipients are older and less tech savvy so they are easier targets
  • There is a trust from emails appearing to come from the church

Please note that the Church’s email account is not compromised and is safe; it is scammers impersonating the email address.

The Church’s email address is protected by a strong password and two-factor authentication.

How to recognise a Scam email

Computer security software such as anti-virus, firewalls and spam filters always helps – but the biggest protection for, and risk to, your online security is YOU.

Here are some points to help you recognise a scam email.

The sender's address

The email will come through with a sender name that looks like the Church’s, such as “Bethlehem Church” – but if you expand the details of the sender, you will see the sender is not from the @lca.org.au address.

Timing of the email sent

Usually I send out the bulletin on either Thursday or Friday during business hours.

If you receive an email at an unusual time (like 5am in the morning), this is out of character and the email should be considered suspicious.

Email content

The Church bulletin’s email follows a consistent form and is personalised and specific to the congregation and Christian faith.

Any links or attachments in the bulletin will be explained, instead of the vague references that scams usually use.

For example:

Today delegates at the General Synod of the Lutheran Church of Australia and New Zealand (LCANZ), meeting in Melbourne, directed the church’s General Church Board and the College of Bishops to explore the theological, constitutional and governance issues involved in establishing ‘one church with two different practices of ordination’. 

Read more from Convention of General Synod

 

This is more focused on the church than vague content like:

just forwarding a few pics (#5 and #9 in particular) http://www.scam_website.com

What do I do when I receive a Scam email?

Most email clients provide a “Mark as spam” function.  If you mark the email as spam; this will help the email servers identify and block future scam emails for you and other recipients.

If this feature is not available in your email client; just delete the email.

I'm still not sure...

If in doubt – don’t open the email.  Treat it as suspicious.

You can forward the email to the bulletin’s email address and I can help you out.

Or alternatively, see me after church.

Qlik Enterprise Manager – Upgrading from 2021.05 to 2022.11

As developers wanted to take advantage of new features in Qlik Enterprise Manager (QEM), we had to upgrade our system from 2021.05 to 2022.11.

This was not a smooth process and we encountered several issues along the way.

First Attempt – Straight Upgrade

Simple to upgrade?

Just download the latest installer from Qlik’s website and kick it off?

Unfortunately, I gave developers the keys to our dev QEM box to do the upgrade before Christmas, as I was busy with other tasks.

They gave it a go without a backout plan.

A couple of days later I tried logging into the QEM console and couldn’t get on.

The AttunityEnterpriseManager service kept restarting every minute and the following errors were in the  C:\Program Files\Attunity\Enterprise Manager\data\logs\EnterpriseManager.log

[START: 2023-03-01 09:40:18 Attunity EnterpriseManager Version: 2022.11.0.335]
1 2023-01-02 09:40:18 [Command ] [INFO ] Executing ServiceRunCommand command.
6 2023-01-02 09:40:19 [Host ] [INFO ] Setting up web host
6 2023-01-02 09:40:20 [Host ] [INFO ] The server will listen on the following location: http://xxxxxxx.xxx.xxx/attunityenterprisemanager
6 2023-01-02 09:40:20 [Host ] [INFO ] The server will listen on the following location: https://xxxxxx.xxx.xxx/attunityenterprisemanager
6 2023-01-02 09:40:20 [Host ] [ERROR] Object reference not set to an instance of an object.
6 2023-01-02 09:40:20 [Host ] [INFO ] Stopping service on error...

I tried the usual desperation tricks of restarting the service and then restarting the box.

In the end I had to uninstall QEM and reinstall 2021.05 as there was no backout plan.

The developers were subsequently banned from doing any more work on the QEM upgrade.

Second attempt – Two jump upgrade

Next attempt was upgrading to 2022.05 and then jumping to 2022.11.

This time I took a backup of C:\Program Files\Attunity\Enterprise Manager\data so I could easily back out the change if I had problems.

The upgrade from 2021.05 to 2022.05 went smoothly; but when jumping from 2022.05 to 2022.11 the same error occurred.

I then raised a case with Qlik and also posted on their forum asking if anyone else had come across this problem.

Final Success

With help from Qlik support we developed an upgrade workaround.  Qlik said that a patch is in the works, as other users have had the same problem.

If you follow along – note that this method might differ for your system.  In the three upgrades I have done internally; they all differed slightly.

Make sure you TEST and HAVE A BACKOUT PLAN

Upgrade Prerequisites

This upgrade was done on a Windows 2019 server.

You will need:

  • Admin rights to QEM’s server
  • Licenses for QEM and Analytics
  • The username/password that AEM uses to connect to the analytics database
  • A privileged user account for the analytics database
  • Password for the user that you use to connect QR nodes to QEM
  • A SQLite database client
  • Handy to have – an editor that can validate json format

Method

  1. Log on to QEM console
  2. For each server under the server tab – Right click and go to “Stop Monitoring”
  3. Remote desktop to the QEM server
  4. Stop the AttunityEnterpriseManager service
  5. Back up C:\Program Files\Attunity\Enterprise Manager\data
  6. Run the upgrade to QEM 2022.05
  7. Once the upgrade is done; log onto QEM and ensure that version 2022.05 is working correctly
  8. Back up C:\Program Files\Attunity\Enterprise Manager\data again
  9. Stop the AttunityEnterpriseManager service again
  10. Run the following commands from the command line:
    cd "C:\Program Files\Attunity\Enterprise Manager\bin"
    aemctl repository export -f C:\temp\qem_repo_202205.json -w

  11. Script out the postgres database with the following commands:
    cd "C:\Program Files\PostgreSQL\10\bin"
    pg_dump -U <username> -s -C -c <analyticdbname> > C:\temp\cdcanalyticsp.sql

  12. Edit C:\temp\cdcanalyticsp.sql.  Find and replace “<analyticdbname>” with “cdcanalytics_2022_11_p”.  Save the file
  13. Run the following command to create a new temporary database on the postgres server:
    psql -d postgres -U <username> -f C:\temp\cdcanalyticsp.sql

  14. This is where the problem lay in our instance.  Open up C:\temp\qem_repo_202205.json
    Find any code blocks with "type": "AttunityLicense" in them and remove the whole code block.  Save the edited file as C:\temp\qem_repo_202205_Fix.json (the file imported in step 20).  A hedged Python sketch of one way to automate this is shown after this list.
    1. On our dev system it was the section with "name": "Replication Analytics", but in prod it was the section with "name": "Replication Management".
    2. You may need to experiment to find the correct section
  15. Go to Add/Remove and uninstall Enterprise manager
  16. Once removed, delete the folder C:\Program Files\Attunity\Enterprise Manager\data
  17. Install QEM 2022.11.  When prompted do not install Postgres
  18. Log into QEM and check that it is up and running correctly.
  19. Stop the AttunityEnterpriseManager service again
  20. Run the following command:
    aemctl repository import -f C:\temp\qem_repo_202205_Fix.json -w

  21. Start the AEM service and check that it is running correctly
  22. Re-add the QEM and Analytics license 
  23. Go to Settings -> Repository connections.
    Enter the username and password, and set the database to cdcanalytics_2022_11_p
  24. Initialize the repository and start the connector
  25. Stop the AttunityEnterpriseManager service again
  26. Open up C:\Program Files\Attunity\Enterprise Manager\data\Java\GlobalRepository.sqlite in an SQLite editor
  27. Browse the table objects.
    Look for name = "AnalyticsConfiguration" and modify the JSON value "cdcanalytics_2022_11_p" back to the original analytics database name
  28. Save the SQLite database
  29. Start the AEM service and check that it is running correctly
  30. Go to the server tab.  Edit each server and re-add the password.  Click on the button “Monitor this server’s tasks and messages”.  Test the connection and click OK
  31. PVT QEM
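
For step 14, I removed the offending blocks by hand in a text editor, but something like the following Python sketch could do it programmatically.  It assumes the exported repository JSON contains lists of objects carrying a "type" field, and it removes every AttunityLicense block; as noted in step 14, you may only need to remove one particular section, so check the structure of your own export before trusting it.

import json

# Paths assumed from the method above
SRC = r"C:\temp\qem_repo_202205.json"
DST = r"C:\temp\qem_repo_202205_Fix.json"

def strip_license_blocks(node):
    # Recursively drop any object whose "type" is "AttunityLicense"
    if isinstance(node, list):
        return [
            strip_license_blocks(item)
            for item in node
            if not (isinstance(item, dict) and item.get("type") == "AttunityLicense")
        ]
    if isinstance(node, dict):
        return {key: strip_license_blocks(value) for key, value in node.items()}
    return node

with open(SRC, encoding="utf-8") as f:
    repo = json.load(f)

with open(DST, "w", encoding="utf-8") as f:
    json.dump(strip_license_blocks(repo), f, indent=2)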

If things go wrong

Backout process

  1. Uninstall QEM
  2. Delete the folder C:\Program Files\Attunity\Enterprise Manager\data
  3. Reinstall either QEM 2021.05 or QEM 2022.05
  4. Stop the AttunityEnterpriseManager service
  5. Restore the appropriate backed up data folder to C:\Program Files\Attunity\Enterprise Manager\data
  6. Restart the AEM service

Facebook links + Former Girlfriend and Scam Weight loss pills

I swear I am working hard this close to Christmas!

But I did “accidentally” check my Facebook account and saw a suspicious post that has been getting around my feed.

I have seen posts like this before.  A public post with a couple of dozen people tagged in it.  In this case; one of the people tagged in the post was my (very nice) former girlfriend.

I have seen variations of posts like this with sunglasses and shoes; but this seems to be a new type that hit my news feed.

But this one intrigued me.  The picture looks very convoluted, with one image superimposed over another.  If you look carefully you can see tiled floors and other architecture.  I am guessing the author did this to throw off image matching software that FB might use to match known scams or copyrighted images.

Looking down the Rabbit hole

I am not a security or a web design expert but decided to check out the website.

I suspect it is a Facebook worm of some sorts.  

One thing I didn’t want to do when investigating the website was trigger the worm’s propagation process and make matters even worse.

As a precaution I opened up the link on my home computer in an incognito window with NoScript running.  The last thing I wanted to do was blow up my work computer by opening a malicious link.

Here is the website in all its glory:

The short link resolved to a fuller domain name and I ran a whois check to find out more.

 

This actually surprised me.  Other scam sites that I have checked with whois usually have domain names registered quite recently.

For instance a phishing email I got the other day – the domain name was only a day old when I looked it up in whois.

Checking out the source code behind the site – the actual HTML code was very neat and tidy.  Nice – I like well formed code.

Other things I noted:

  • All the main menu links jump to a particular spot on the page.  This then links off to another website where you can enter contact details like name, address, phone, credit card number etc.  I didn’t go any deeper than this to see the response to putting in fake details, as that was out of my depth.
  • There were Chinese characters in the HTML comments.  This is by no means implying that people who write comments in Chinese are dodgy – but it seems unusual that a “US Today” website would have foreign language comments in it.

Image searching

The website was filled with “Before and After” images and testimonies on their product.

Using Google Lens I searched some of these images.

Our dubious website:

Google Lens came up with:

Google Lens found the image on a Dutch city website for a nutritional consultant, Stein van Vida Vitaal.  Our subject’s name has changed from “Gerald” to “Fred”.

One more check for the end of the day was our “Before and After” bikini model photo in the comments:

Google Lens returned an Insider article by Emily DiNuzzo about “Before and After Body image Photos”.

The author of that article actually got the photo from Shutterstock, which she referenced (link now broken) on the website.

Summary

A proper web developer and security expert could probably pull apart this website and explain how the malicious part of it works – but I am not at that level.

One of the biggest risks to security is the component that sits between the computer keyboard and the chair.  

For all the diving into the code, checking domains and looking up reused images over the past half an hour; the biggest thing that stood out to me is the language in the Facebook post:

“I use it. I didn’t change my diet or exercise. Photos before and after use”

It doesn’t seem right.  The post is very vague. 

What did she use? Why aren’t there more details?

If she lost a heap of weight; wouldn’t her language be prouder? 

More descriptive?

Wouldn’t she be showing photos of herself instead of some convoluted photo?

Why is my former girlfriend tagged in this post?  She doesn’t have the personality to use questionable diet products.  If the poster is her friend – would she really share something in this manner?  If it was meant for my former girlfriend – why not send it in a private message or a group chat?

Don’t make hackers’ lives easy.  Think before you click.

How to make Toffee Apples (Dentist’s nightmare)

Our kid had his seventh birthday party coming up.

My wife wanted to invite the whole prep class to his party, with the ulterior motive that all the kids in the class would be invited to at least one party for the year.

The sentiment was nice – but the reality was that we had 40+ kids at a party.

I was given the task of making the gift bags.

I wanted to do something different from the usual gift bags that we get in Australia, so after a family discussion we decided this would be the base of the gift bags:

  • Pokemon cards (you can buy bulk lots on ebay)
  • A custom Geocache location as a treasure hunt
  • Seed bombs
  • A toffee apple

Dangers of Sugar Napalm

Deep frying and molten sugar scare me.  Any liquid that can be heated hotter than boiling water is, in my mind, very dangerous (I was burnt by hot soup when I was a kid).

A good YouTube video had some good safety instructions:

  • Wear long pants and shirt
  • Wear enclosed footwear 
  • Have a large container of water on standby so that if you get molten sugar on you, you can plunge the affected area into the water as quickly as possible

I would also keep kids at a safe distance until they are old enough to help out.

Recipe

Based on the following video, with a few modifications:

  • I didn’t use cinnamon hard lollies 
  • Added Raspberry essence   

Ingredients

Makes about 20 small toffee apples

  • 20 small “Snacking” Granny smith apples
  • 850g of white granulated sugar
  • 340ml of water
  • 170g of Glucose syrup 
  • 10 drops of red food colouring
  • 10 drops of Raspberry essence (note: this might need to be adjusted depending on the strength of the essence)
  • Sturdy sticks for the handle

 

Method

Night before

  1. Wash the apples in very hot water to remove the wax coating
  2. Push the sticks in the base of the apple and ensure they are solidly in place 
  3. Place in fridge overnight to chill

On the day

1. In a heavy-based, spotlessly clean saucepan, mix together the sugar, water and syrup.

Pro tip – put the saucepan on the scales and pour the syrup directly in.  If you measure the syrup out in an individual container – you will be forever struggling with a sticky mess of transferring syrup from one container to another 

2. Stir the contents very well to ensure all the water and sugar is mixed thoroughly together

3. Place pan on medium heat on the stove and bring to the boil.  Cover pan with the lid and allow to boil gently for 5 minutes

4. Uncover the pan and insert the sugar thermometer in the pot.  Boil without stirring until the temperature reaches 138°C (280 F)

This step might take a while, as the water first has to boil off.  This is why the temperature will stall at 100°C for a while as the water evaporates.  Once all the water has boiled off, the bubbles will change into larger and slower bubbles.

Don’t leave the pot unattended as the temperature rise will happen without warning.

5. When the temperature reaches 138°C; add in the food colouring and Raspberry extract drops.  Distribute the drops in different locations so it is easier for the boiling sugar to mix through.

6. While waiting for the sugar to reach the correct temperature:

  • Take the apples out of the fridge and dry off the condensation.
  • Prepare some trays to put the finished toffee apples on.

7. Once the temperature reaches the “hard crack” stage of 150°C (300 F); remove the pan from the heat.  Wait a short while for the bubbles to subside.

8. Dip an apple in the sugar syrup and quickly but carefully turn it to get the thinnest coat possible on the apple. Allow the toffee apple to drain the excess liquid off before placing it on the prepared tray.

Whether you go right up to the stick or leave a little gap at the top is the dealer’s choice.  

If the sugar syrup is starting to get too sticky, reheat it on the stove to bring it back up to temperature.

9. Wait until the toffee is cool (should be quick with cold apples) and enjoy.

Notes

  • I think the kids enjoyed it as a novelty, something different.  It also brings back nostalgic memories for the older generation.
  • The recipe scales well.  I did a half recipe as a tester and then to make 40 toffee apples we doubled the recipe.
  • A “Candy” thermometer is indispensable for someone starting off making sugary treats.  Well worth the $20 investment
  • To clean the pot afterwards, fill it with water and bring it to the boil to remove the stuck sugar
  • Once the toffee apples are cooled down and no longer sticky – bag them up or put them in an air tight container.  They should last a couple of days depending on the temperature and humidity – five days is starting to push it.
  • The glucose syrup can be substituted with honey – although I haven’t tried it
  • You can try different flavourings for the sugar syrup, such as almond or strawberry.  I even saw a “Toffee apple” essence in the shop that I bought the Raspberry essence from.

Why can’t Qlik Replicate and Enterprise manager be friends?

A nice quiet evening with the family was interrupted with a critical alert coming through my phone.

Once again server patching had knocked off our Qlik Replicate node.

Logging in I could see that Enterprise manager could not reach one of our nodes with the following error message in the log:

2022-12-15 19:15:40 [ServerDto ] [ERROR] Test connection failed for server:MY_QR_CLUSTER. Message:'Unable to connect to the remote serverA connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond xxx.xxx.xxx.xxx:443'.

I have seen this problem before and it is usually resolved by:

  1. Failing over the QR windows cluster
  2. Restarting the new passive node

Our IT team is aware of the problem and has been researching a cause and a fix.  Their prevailing theory was that when the cluster gets failed over during server patching, there are some residual connections to the previous active node.

But tonight, after multiple failovers and stopping and starting the QR roles, Enterprise Manager still couldn’t connect to that QR node.

I did the following checks:

  1. The repsrv.log log file had no error messages and the Web UI service was active
  2. From the Enterprise manager; I could ping the QR cluster address and the active node successfully 
  3. From a Chrome session on the Enterprise manager server; I could not get to the QR Web console
  4. From a Chrome session on the QR server; I could get to the QR Web console

A senior IT member joined the troubleshooting call and suggested that we reboot the Enterprise manager server.

So we did, and then we couldn’t access the Enterprise Manager console at all.

At this point I wanted to quit IT and become a nomad in Mongolia.

Then the senior IT member worked it out.

The Windows server was randomly turning on the Windows Firewall

This was blocking our inbound connections; making the console inaccessible from other locations – except when you were logged onto the server.

This also explains why, when this problem previously arose, restarting the server would eventually work: the server group policy would eventually get applied and turn the Windows Firewall off again.

Lessons learnt

If you come across this problem in your environment, try accessing the QR console from multiple locations:

  • From the Enterprise Manager server
  • From within the QR server with a local address like: https://localhost/attunityreplicate/login/

Good luck