Alarm bells! I realised today that my GitHub Action to deploy my blog site has stopped working. That explained why some recent changes to my projects page, where I added some YouTube videos, weren’t reflected on the live site.
On checking my source files, I noticed that I had in fact made the changes to the pages, but for some reason they weren’t in GitHub. Huh? I was sure I pushed them. Well, in fact I did attempt to, but didn’t notice this error at first, when I tried to push once more:
```
To https://github.com/s-moon/logicalmoon.com.git
```
After scratching my head a few times, I worked out what I needed to do. If only I had bothered to read my own blog! Like a prize dingbat, all I needed to do was recall this post where I explained the fix.
However, it still wasn’t working, so what next? Well, the deployment screen in GitHub had this to say:
“man in the middle attacks”…”offending RSA keys”. It’s enough to make you want to bleugh but after some googling (here and here) plus a dash of experimentation, here’s what you need to do. Note: These steps are based on already having set up GitHub Actions for my server, so your mileage may vary.
We can afford to remove the known hosts completely, so let’s do that.
```shell
$ cd ~/.ssh
```
OK, we need to be a little more selective. Let’s try to just remove things related to GitHub.
```shell
$ cd ~/.ssh
```
Removing the IP address will probably yield no change, but I’m adding it just in case. Also, github.com will possibly come back with a variety of IP addresses, so this may be a moot point, anyway.
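If you want to see exactly what that removal does, here’s a self-contained sketch using a temporary file and made-up keys; on a real machine, `ssh-keygen -R github.com` against `~/.ssh/known_hosts` achieves the same thing.

```shell
# Offline sketch: filter GitHub entries out of a known_hosts file.
# (Fake keys and a temp file; on a real machine `ssh-keygen -R github.com`
# does the equivalent against ~/.ssh/known_hosts.)
KNOWN_HOSTS=$(mktemp)
printf '%s\n' \
  'github.com ssh-rsa AAAAB3FAKEKEY1' \
  '140.82.121.3 ssh-rsa AAAAB3FAKEKEY1' \
  'gitlab.example.com ssh-rsa AAAAB3FAKEKEY2' > "$KNOWN_HOSTS"

# Keep every line that does not mention github.com. Note the bare IP entry
# survives the filter, which is why removing the IP as well can matter.
grep -v 'github\.com' "$KNOWN_HOSTS" > "$KNOWN_HOSTS.clean" && mv "$KNOWN_HOSTS.clean" "$KNOWN_HOSTS"

cat "$KNOWN_HOSTS"
```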
Once you have done the above two steps, here comes the last one. We’re going to tell our server what the RSA key is for GitHub.
```shell
$ ssh-keyscan github.com >> ~/.ssh/known_hosts
```
That should hopefully be enough; at least it was for me.
C’est tout, folks.
Hi! Did you find this useful or interesting? I have an email list coming soon, but in the meantime, if you read anything you fancy chatting about, I would love to hear from you. You can contact me here or at stephen ‘at’ logicalmoon.com
I spent yesterday scribbling in my notepad about how I thought v1 of this system might look, but the cribbing I mentioned in my last post was just of a single table with no relationships. In addition, I didn’t think about grave ownership or how I would link to many cemeteries, so it couldn’t be right, at least not quite. I needed to beef things up a little.
So, here are my first thoughts on what the database should be able to represent, using 4 main tables: cemetery, grave, image and user.
Each cemetery has a name. Later we might want to add address, images, location etc. but that can wait for now.
Each grave (or memorial) is associated with a person, which is represented as one or more first names plus a surname. Sometimes the actual first name or names aren’t known, so this can be optional or just represented as initials.
That person will have been born in a year (occasionally unknown) and, likewise, died in a year.
Some headstones, for instance, give full dates for these events, not just the year. I’ll need to find a way to differentiate that in the UI, but for now I only want to store two dates rather than splitting them into year, month, day and so on.
A grave has one owner. That is, the person that added the grave onto the site. On public sites, this tends to lead to all sorts of problems where someone doesn’t agree with how a grave is portrayed, or wants to edit it but can’t. Here, I’m really catering for people that may not want it to be public anyway.
Each grave belongs to (or resides in) a cemetery (or crematorium). I was thinking about this being a requirement, but actually, even though we know a person was either cremated or buried, we don’t always know where. In that case, I’m not sure this will be the best tool to record it, but nonetheless…
A grave may or may not be private. Private will mean that it cannot be found from the main site without the owner either logging in or expressly sharing it. The default will be private.
There will also be a plot. This is the location where you can find the grave and typically takes the form of a letter and numbers. This isn’t always known or shown on the grave, so is optional.
There will also be an inscription (e.g. “Here lies…”). It may or may not be present.
The position on mother Earth will be recorded using a latitude, longitude and a what3words label, in case someone uses that to mark locations instead. Again, optional, but ideally it will be filled in.
By the way, if you’re not familiar with what3words, it’s an alternative way of representing locations on Earth. In this case, it’s a name given to a 3m square area and consists of three easy-to-remember words, separated by periods. What 3 words…get it?
Lastly, each grave is linked to one cemetery (or crematorium). I can’t imagine that being any other way, can you? :-)
Each grave may have zero or more images associated with it.
Each image is associated with one grave. This might be a problem later if we have a picture of a memorial which can list many names, not necessarily related. The workaround would be to upload the image again, but I’m not keen. Alternatively, perhaps we crop the image so it shows just the pertinent name? I’ll need to think about this later.
An image is uploaded by a user. That user owns the image. This in itself is a minefield, with many family history groups being quite unhappy about who owns the copyright etc. to uploaded images. My feeling is that the uploader owns them, but I can understand why sites want to monetise stuff. People behave as if there are no costs to these free sites, but of course that just ain’t so.
Moving on, an image may be the primary image. That is, the one first shown on the grave page. I did think it might be easier to base it on the earliest image uploaded, but what if you make a mistake? Been there.
A user may register for the site and create no grave entries, but if a grave is created, it must be owned.
Lastly, all tables will have created_at and updated_at fields consisting of date/times. Oh, and before I forget to mention, I haven’t added in all the fields, especially not the Laravel default ones.
So what does this look like as an ERD?
This was produced in Lucidchart in case you’re interested - an excellent free resource if you only create a few diagrams.
I’ve already hinted that I might need to adapt as time goes on, but for a first (second?) draft, it’s a good start.
Next step: create the migrations, throw a few seeded values in and see if it breaks.
I had a productive day today re-acquainting myself with Laravel and rebooting this project. Like all beginnings, here was my hello world.
Where to begin? Well, I wanted to start off again with a blank project. Don’t you do that sometimes? Just like when we were back at school? You know what I mean - that new notebook feeling where it’s a new dawn, it’s a new day…
I have chosen not to use Laravel Sail (the Docker environment) because it just doesn’t play nicely on my Windows PC. Docker is meant to be so simple, but still, I find Windows won’t work well with it (without a lot of work) and it prevents me from using Oracle VirtualBox at the same time, which I still use. Composer create-project it is.
As you can imagine, straight after that I set a few keys in the environment file and referenced a basic SQLite database. Why not MySQL? It’s a bit overkill for me and the little information I am going to add to it. I’ve also been hearing more and more good things related to SQLite and think it might just (maybe) be fine, even if I do get to store lots of data and have many visitors.
Next, I didn’t want to use any JS framework within this project, nor Tailwind. Right now, I want to limit how much extra I need to learn and instead focus on the functionality, but I might come back to that later. I’d of course need some kind of account authentication, so I used the Blade version of Breeze, which is perfect.
As soon as I could log in and register, I removed the registration parts from the site. When this goes live, I will be the only person logging in for a while, so I didn’t want to deal with spamming etc. There’s a nice video by Amitav Roy on YouTube which shows one method you can use to do this.
With that in place, I created a seeder to auto-register myself in the users table.
Here’s where I cribbed something from the old project. I took the old migration for the grave table and just used that. One more seeder later, and my ancestor, George Edward Meekums, who died in 1915 on HMS Formidable, was forever added to the database.
```php
Grave::create([
```
Now for more essentials. A favicon!
I found my icon here (thanks: fauzicon) and converted it to a favicon with this excellent site.
The .ico file was quickly dispatched to my public folder. A minute later, I altered the main screens to remove the default Laravel logos etc. and add mine.
Here’s where I’m up to so far:
Time for bed.
And he’s back.
It’s the summer again and after another burst of family history research, I’m traipsing through graveyards again.
They are funny places. Peaceful, quiet, beautiful in places, but you can’t escape the notion that it’s full of bodies which creates this weird dichotomy. Should you enjoy walking through them, reading the inscriptions and marvelling at the tombstones?
For me, it’s an opportunity to learn more about my family or even other people’s families, through discovering relations, dates of birth, death, etc. But there is a huge problem and I’m not sure it’s simply my bad memory. I keep forgetting how to reach the graves. Seriously.
That’s even if you can find the plot in the first place. My great-grandmother lies in Plumstead cemetery with her two children who unfortunately died as kids. I knew the grave location (it’s usually a code like H123) and the maps at the cemetery showed where the H section was, but it was nowhere to be found.
To be fair, if there is no headstone, it’s much harder, but even with one, it can be tricky to find. In this case I learnt that there was no headstone and it took me about 12 attempts. That is, 12 times visiting the site and hunting for up to an hour each time with my children.
It’s not much better when there is something you can see above ground. Another distant relative was a young girl that also died very young, and is buried in another cemetery. It took me several attempts (this time with my daughter) to find her. Once found (and learning from my mistakes) I made some notes about how to locate the grave again. Even with those, I still find it hard to reach. The trouble is, rows and rows all look the same, and you often end up hunting for landmarks like trees, buildings, or worse, other people’s graves, just to find the one you want!
Anyway, I wanted to create something which was for me and would help me locate all my relatives. There are lots of sites that do this kind of thing, but they have bad UI, require payment, and are often public. To be honest, I am not sure I want my site scraped to death, unless it’s to benefit people who care about the research rather than simply hoard large numbers of memorials.
So, I started a project and got this far.
I was able to view and edit graves, but it kinda stopped there.
Then things happened, time passed, and I left it. Now, like me, it’s back in my mind, so these posts will describe my attempts to revive the project and create that useful resource I always wanted. Meet graverecorder.com.
Follow along if this takes your fancy!
I was watching a video by Jeffrey Way on Laracasts and he mentioned something called the “Squint Test”. That’s right…squint test?!
I’d never heard about this before but he described it as a refactoring method which basically works like this.
You look at your code project, squint your eyes by 20% and then try to see any lines which are:
If you find any, you tweak them to improve them.
I did a quick Google and was able to find one more reference here by Jeff Gothelf in 2011, which used the same technique but for the visual design of web pages. The aim in that case was that by squinting, you would see your page as a new user might, and then, could get a feel for the visual hierarchy.
Anyway, don’t do this whilst driving but consider giving it a try.
I mentioned in my year-end review that I had given a talk at the NCA and, as you might expect, I used presentation software to produce some slides. How I wish I had read this book by David Henson first!
I grabbed my copy of Your Slides Suck! from the local library as an afterthought, and almost didn’t because of the click-baity name, but I am glad I did.
Your first thought on skim-reading it might be that there isn’t much information. Lots of images, gaps in text and half-filled pages, but I now think that is what makes it so digestible. I was easily able to pick this up, read a chapter, and come back to it another day.
The book begins by introducing an acronym: A SAMPLE OF RICE which stands for the following -
This makes you think about what you will be talking about, to whom, what their takeaway from it will be and where the presentation will happen. Lastly, it reflects on how long the talk will be and how you will go about it, but what I think was a little more useful is RICE.
He uses this one to guide you into thinking how imagery (diagrams, photos, charts etc.) can actually enhance the slides. The idea is that you really consider whether what you are adding, “adds” to the presentation and in which way.
What I especially loved were the examples where he showed a boring, verbose slide (just like I had produced!!) and replaced it with a beautiful, impactful one that enhanced the information you are trying to give. It made me think of all the supplier presentations I had sat through and how nice (aka professional) they looked - that’s exactly what he is trying to help you achieve with this guide.
One other big aspect I will take from this is the idea that you may be better off not using the standard templates that things like PowerPoint direct you towards. I’m talking about those standard slides with a heading at the top, graphics round the side, a fixed area in the middle and so on. Instead, he suggests deleting all of that and creating your own custom masters. Sounds painful, but he shows you exactly how.
Another nice touch is that if you sign up to the author’s website, you can access videos which complement the sections in the book. That’s especially helpful when otherwise he would have to fill pages with instructions which could become boring or try to write about the animations, rather than be able to show you.
All in all, I would buy this if you think there is a chance you might produce any presentations. It will be well worth it and sorely tempts me to re-do my presentation slides!
Oh man…I’ve been holding off a little in writing this. I’ve just deleted a bunch of pre-excuses (eek!) so instead, let’s take a peek at my goals for last year, one by one. If you’re interested, here’s what I had to say in 2020, 2019 and 2015.
Well…not too bad. 4 times a month makes 48 posts and I managed 39!
The real winner was when I spent 30-plus days blogging every day. Had it not been for that, I don’t think I’d have been so prolific, of course. It does show, though: consistency is what wins the race.
My favourite, which isn’t finished yet, is the Jest series, which started here.
Score: 8/10
I did start a big project, but didn’t quite finish. Here’s a screenshot of my work in progress, so far. You can judge how well I did here :-)
Score: 0/10
Sigh. I watched a metric ton of videos on all sorts of topics, but only created one. That was for a charity I help out at - which isn’t for my blog - so sadly doesn’t count.
Score: 0.05/10
| Year | Visitors | Posts | Best Month (Visitors) |
|------|----------|-------|-----------------------|
| 2021 | 9,969    | 39    | May (64)              |
| 2020 | 11,826   | 32    | April (144)           |
| 2019 | 14,809   | 25    | July (226)            |
| 2018 | 11,317   | 33    | December (151)        |
| 2017 | 7,736    | 24    | February (100)        |
| 2016 | 8,792    | 25    | December (151)        |
| 2015 | 7,393    | 63    | July (127)            |
With figures like this and a trend that would be perfect for skiing in summer, I should consider monetizing my blog. Let me know if you agree.
I do take heart from the fact that I wrote more posts than in any year since 2015, so that’s good.
What about popular (sic) pages?
This gem in 2018 on using Get-ChildItem was popular with almost 1000 views.
My maze post from 2015 is still popular with 686 visits. I remember that - it was fun to do.
Lastly, at #3, I have this on searching for file types, from 2019. Less fun, haha.
Firstly, no goals. Not because I don’t have any (I do!) but because I sound like a broken record with my personal (non-work) achievements. I will amaze and surprise you with my next review, next year.
Now that I sit and think about it, there were a few interesting things that happened last year.
One thing that came out of my blog and my personal connections at work was an opportunity to give a talk at the National Crime Agency. I might talk about that in another post but they were a really friendly bunch and I did enjoy that, for sure.
I also helped edit the book for another of Brent’s (and Spatie’s) courses, Event Sourcing, from which I learnt heaps.
On that note, I also contributed feedback through the Manning review process for a book on OpenAPI named Designing APIs with Swagger and OpenAPI. That’s due out in March 2022 and I should be getting my own copy around about then.
One last book I contributed comments and suggestions to was The Embedded Entrepreneur by Arvid Kahl, which is all about building and using communities when starting a business. Check it out - it’s really good.
Course-wise, I took (and passed) the Azure Fundamentals AZ-900 exam in June. This was proctored at home and was a real faff. I can’t remember if I wrote about my experience, but it was stressful, and not because of the content - they really don’t make stuff like that easy.
I think that’s the highlights. Let’s see what 2022 brings.
I was watching a recent episode by LaravelDaily and in it, Povilas referred to a nifty little website that I wanted to share with you: Carbon.now.sh.
The basic premise is that you paste code into an editor, it’s tidied, coloured, and then presented in a variety of themes. At that point, you can download and use it as a static image or even tweet it!
That’s my little example above. Take a look and see what you think.
I was Googling things to do with the PHP Zend certification and came across one of those dump sites. These are the kinds of places that are supposed to be filled with questions and answers to the exams so that you can cheat at the certifications. I guess if you were a little more generous, you could say that people use them to get an idea of which kinds of questions they ask, but it’s a grey area (for some, not me!).
Anyway, that aside, here’s a question that I read. I figured I would work out the answer first and then compare it with what they believe it to be. Take a look now and spend a few minutes on it - what do you think the answer is?
What is the output of the following code?

```php
<?php

class C
{
    private $x = 1;

    public function __construct()
    {
        ++$this->x;
    }

    public function __invoke()
    {
        return ++$this->x;
    }

    public function __toString()
    {
        return (string) --$this->x;
    }
}

$obj = new C();
echo $obj;
```
I’ve left the formatting as-is but did fix a syntax error with a missing bracket. The answer this site gave was…wait for it…D. Um…
If we break it down, let’s see what happens at each step.
Firstly, `$x` is set to 1.

When the class `C` is instantiated, the value of `$x` is incremented. Here, it doesn’t matter if it’s a pre or post-increment, but in this case, it’s pre. So, `$x` is now equal to 2.

If the `__invoke()` function were to be called, that would return the value of `$x` after it had been incremented. At that point, it would be equal to 3.

If the `__toString()` function were called, it would return the value of `$x` after it had decremented it by 1, and then convert the result into a string.

Whether `__invoke()` and `__toString()` are called is going to depend on what happens with the object, so let’s take a look at the code outside of the class now.

It first creates an object with `$obj = new C();`. So, we know that `$x` will be equal to 2, because initially the value of `$x` is set to 1, and then the constructor increments it.

The next line echoes out the value of `$obj`, which is of type `C`, so this is where we hit upon one of those other functions. In this case, that’s the `__toString()` function, since we are echoing the value of the object, and when you do that, PHP will invoke a method of that name to handle displaying the object. Remember what that did? It decrements the value of `$x`, converts it to a string, and then returns it, which is then displayed on the screen.

So, we went from a value of 2 back to a value of 1, thanks to `__toString()`, which means the answer must be B, right?
Well, slow down cowboy. The dump site actually said the answer was D…Eek.
Let’s run this through the PHP interpreter to convince any disbelievers.
```shell
> php -a
```
Ta da. The lesson for today is of course, don’t cheat…or don’t trust the internet…or both.
I’ve just spent a frustrating couple of hours trying to set up my own VPN on AWS using the free tier and am writing this to hopefully save someone else some of that pain.
Here’s the shortened version:
Here’s how I solved it, but note that I am only including some snippets, which you need to apply in conjunction with the guide below.
To begin, create an account at OpenVPN and purchase a BYOL license. It’s free for 2 users and you don’t need to enter any credit card information.
Make sure you keep the subscription page handy and specifically, the copy key button - you will need that for the installation.
Now move onto this guide which will show you how to create an EC2 market place instance on AWS.
At the point of configuration, you will now need to connect to the instance. For that, I used BitVise, added the key pair to my Client Key Manager and found this guide helpful. Note: the username will be `openvpnas`.
After SSHing in and watching the wizard begin, I followed all the defaults in the installation as per the written guide above.
Note: don’t forget to change the password for the `openvpn` user again, as per the guide.
Next, when the EC2 instance was running, I connected to the admin URL in my browser and ignored any complaints about SSL certificates. You should have been shown the URL just after you completed the wizard but my output looked a little like this in case you missed it:
```
During normal operation, OpenVPN AS can be accessed via these URLs:
```
From here, there are a few settings you may want to pay attention to in the admin panel.
Make sure you use the hostname rather than the IP address, in case it changes. You can find yours on the AWS summary page for the instance. For me, in an EU region, it’s something like: `ec2-XXX-XXX-XXX-XXX.eu-west-2.compute.amazonaws.com`.
Click Save Settings at the bottom of the webpage.
Note: set this first before you connect any clients, or you will need to re-import the configuration.
Under the VPN settings of the admin screens, make sure you have this setting like so:
This is really important. Just because you changed a few settings doesn’t mean the server knows about them! Make sure you click this at the top of the page, too:
I used this guide for Windows.
After installing OpenVPN Connect from the Play Store, enter `openvpn` as the username. You should now be disconnected, but on the Profiles page.
Use a service such as this to see if your IP address has altered. It should have, hopefully.
Lastly, I found this site for a Macintosh program called Tunnelblick really useful for common problems.
Hopefully that will be enough to get you over any bumps should you find any!
When debugging a web application in Visual Studio 2017, I kept running into a problem whereby Chrome was caching a CSS file I was using. Here’s a quick tip to stop it happening so that you can focus on your changes, rather than scratching your head wondering why something isn’t changing!
In short, the trick is to set up the default browser to run with a command-line switch that disables caching, and here’s how.
Find the browser menu in Visual Studio on the menu bar. It will look like this.
Click on Browse With... Then click on Add... and you will see the Add Program dialog.
Under each of the settings, add these values, altering the path if your Chrome installation is different.

- Program: `C:\Program Files (x86)\Google\Chrome\Application\chrome.exe`
- Arguments: `--disk-cache-dir=null`
- Friendly name: `Chrome without cache`

Now click OK. Once out of that dialog, you can also select the item in the list and set it as the default browser, if you choose. Finally, click on Cancel to close the window.
From now on, you should be able to debug/browse without caching being an issue.
Using a blend of magic and divviness, on my laptop keyboard I managed to lock my function keys onto the symbols which offer alternative actions like volume and PrtScr etc. Annoying as it was, I had (past tense) no idea how this happened and lived with it for a couple of weeks, until the pressure got too much for me :-)
Fortunately, though, I now know how to undo (or do!) it: press the `Fn` key and `Escape`, where Escape on my keyboard has a symbol showing a padlock with an `Fn` inside. To re-do it, do the same again.
I should have known…
I’ve often wondered where all of those colour names come from in the CSS spec. Things like SkyBlue and DimGray. Who makes these up and how?
It turns out that no one really knows, but this mailing list reference said that they began in 1986 at MIT with the old X Window System (courtesy of BoltClock).
That said, tonight I did find out where the colour rebeccapurple (shown in the banner) came from, and the story is a little sad. I know I won’t be able to do it justice, so read this from Eric Meyer, and a small piece in tribute, to find out more.
I help monitor and maintain a whole bunch of websites and of course, have a variety of SSL certificates. Usually, I keep an eye on them periodically using calendar items and the odd visit to SSL Checker but it is a bit of a pain.
I know what you are thinking - why don’t I just use a Cron job to automatically renew them? Well, sometimes automations fail and I like to also add in a human element.
In this case, what I really wanted was (another!) automated task that would do it for me, perhaps by sending an email, weekly. This blog post will show you how you can do that too using some Linux commands and the Bash shell.
The main things that we need to do for this are to:
To save you some time, here’s the whole script followed by a break-down of how it works.
```shell
#!/bin/bash

hosts=("logicalmoon.com")
port=443
tmp_file=`mktemp`
email_to="you@example.com" # change this to your recipients

for host in "${hosts[@]}"; do
    date=`echo | openssl s_client -connect $host:$port 2>/dev/null | openssl x509 -dates -noout | grep notAfter | cut -d"=" -f2`
    echo -en "$date $host\r\n" >> $tmp_file
    sleep 1
done

mail -s "SSL Certs" $email_to < $tmp_file
rm $tmp_file
echo "$0" | at 1pm + 7 days
```
If you are using the script as-is, open up a file named `check-ssl-certs.sh` and save the above into it. Don’t forget to set the execute bit on the file with `chmod +x check-ssl-certs.sh`, though, or you won’t be able to execute it directly from the command line.
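To see why the execute bit matters, here’s a tiny throwaway demonstration (the script name and contents are just stand-ins):

```shell
# Stand-in demo of the execute bit: until chmod +x runs, you cannot execute
# a script directly by its path. (Hypothetical throwaway script in a temp dir.)
dir=$(mktemp -d)
script="$dir/check-ssl-certs.sh"
printf '#!/bin/bash\necho "checking certs"\n' > "$script"

chmod +x "$script"
out=$("$script")   # direct execution now works
echo "$out"
```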
To begin, the first block of code handles all of our settings:
```shell
hosts=("logicalmoon.com")
port=443
tmp_file=`mktemp`
email_to="you@example.com" # change this to your recipients
```
`hosts` is an array of all the domains I want to check. If you want to add some more, separate them with a space and more quotes, like this: `hosts=("domain.com" "domain2.com")` and so on.

`port` is the HTTPS port number and will most likely be `443`, as I have set it.

`tmp_file` contains the path of a temporary file, which will hold our report. `mktemp` is great in that it will take care of finding an unused filename we can use, and outputs that filename onto the console. We capture the name into the variable `tmp_file` using backticks.

Lastly, `email_to` is a comma-separated list of email addresses of people that should receive the report. Other than `hosts`, this is the one you will definitely want to change.
```shell
for host in "${hosts[@]}"; do
    date=`echo | openssl s_client -connect $host:$port 2>/dev/null | openssl x509 -dates -noout | grep notAfter | cut -d"=" -f2`
    echo -en "$date $host\r\n" >> $tmp_file
    sleep 1
done
```
This is the main loop of the script and its effect is to go through each domain, one by one, setting `host` equal to that particular domain. `"${hosts[@]}"` is a way to expand the array, like so:
```shell
$ echo "${hosts[@]}"
```
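Here’s a runnable sketch of that expansion with a few extra (made-up) domains, counting the iterations to show the loop body runs once per entry:

```shell
# "${hosts[@]}" expands to one word per array entry, so the loop
# body runs once for each domain. (The extra domains are made up.)
hosts=("logicalmoon.com" "example.com" "example.org")
checked=0
for host in "${hosts[@]}"; do
  echo "checking: $host"
  checked=$((checked + 1))
done
echo "domains checked: $checked"
```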
Actually finding out the date which the SSL certificate is to expire happens next and for this, I drew heavily on this excellent post by Mohamed Ibrahim.
```shell
date=`echo | openssl s_client -connect $host:$port 2>/dev/null | openssl x509 -dates -noout | grep notAfter | cut -d"=" -f2`
```
Go read that post to get some of the finer details, but my additions are that I am looking for the `notAfter` string and stripping out the date, which I assign to my `date` variable. This is better explained with some example output. Here’s the code:
```shell
$ echo | openssl s_client -connect $host:$port 2>/dev/null | openssl x509 -dates -noout
```
And here’s what would happen if we used my domain: logicalmoon.com
```shell
$ echo | openssl s_client -connect logicalmoon.com:443 2>/dev/null | openssl x509 -dates -noout
```
The date that matters most to me is the `notAfter` one, since that is my expiry date (err, actually, I’d better keep an eye on that!). Let’s take just that line, excluding the other.
```shell
$ echo | openssl s_client -connect logicalmoon.com:443 2>/dev/null | openssl x509 -dates -noout | grep notAfter
```
Now we want to use just the date portion, so we need to split the string at the equals sign (`=`) and grab the second field. That’s what `cut` does next:
```shell
$ echo | openssl s_client -connect logicalmoon.com:443 2>/dev/null | openssl x509 -dates -noout | grep notAfter | cut -d"=" -f2
```
Lastly, we assign the output to my variable date
as we have done before.
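Here are the same grep and cut stages in a self-contained sketch, fed from a canned sample of the openssl output so it runs without a network connection:

```shell
#!/bin/bash
# Canned sample of what `openssl x509 -dates -noout` prints
sample="notBefore=May 14 08:23:31 2021 GMT
notAfter=Aug 12 09:20:29 2021 GMT"

# The same grep/cut stages as the script, captured with backticks
date=`echo "$sample" | grep notAfter | cut -d"=" -f2`
echo "$date"   # Aug 12 09:20:29 2021 GMT
```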
The remainder of the loop simply appends the results into a file, and sleeps for a second. My use of \r\n
is because this will end up on a Windows machine and I want the line breaks as part of the output.
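As a sketch (the exact report wording is illustrative), the tail of the loop looks something like this:

```shell
#!/bin/bash
# Append one report line per host, with \r\n endings for Windows,
# then pause briefly between hosts
tmp_file=`mktemp`
host="domain1.com"                    # placeholder values
date="Aug 12 09:20:29 2021 GMT"
printf "%s %s\r\n" "$date" "$host" >> "$tmp_file"
sleep 1
cat "$tmp_file"
```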
Almost done. Here’s the last of the script and my line by line explanation:
mail -s "SSL Certs" $email_to < $tmp_file
The first line sends a mail with a subject of SSL Certs
to the list of email addresses in $email_to
using our temporary file as the body contents.
We then remove our temporary file - all good scouts clean up after themselves!
Finally, we reschedule this very same script to run at 1pm in a week’s time. This is a cheeky way of ensuring a script keeps executing on a schedule. Alternatively, I could have used cron, but this runs at user level, so it is simpler and requires fewer permissions.
Here’s an example output that I received via email; I’ve masked the real domains I am checking, but everything else is approximately as you would find if you ran this.
Aug 12 09:20:29 2021 GMT domain1.com
To improve this further, you could check the at queue to confirm that the next run has been scheduled.
Anyway, hope this helps someone should they need to do this.
Hi! Did you find this useful or interesting? I have an email list coming soon, but in the meantime, if you read anything you fancy chatting about, I would love to hear from you. You can contact me here or at stephen ‘at’ logicalmoon.com
This is a new record for me. Ever since I started writing entries in my blog, way back in 2012, I have never managed to write something every day, let alone for 30 of them. I think the closest I came was in early 2015 when I started to transition from teaching to IT, and had more time, and that’s 6 years ago!
So why did I do it?
Well, firstly, I now have more time and crucially, time to be more creative. But it’s more than that - I do really enjoy sharing things I know or find out, and hope there are people out there who appreciate it. Lastly, I wondered whether I could maintain the daily writing and clearly, I can, at least for this period.
Now, I’m going to continue, of course, but at a slower cadence. I have some other projects I want to work on and am excited about, but before I do, I wanted to share some tips for anyone that wants to do something similar with their blogs.
Let’s begin!
The number one tip I have is to be organised because that takes away some of the friction of writing. In my case, that manifests itself in a number of ways:
Part of my role at work is to solve problems and as you can guess with IT, they can come in all sorts of flavours and areas. Consequently, I am constantly thinking about different things and how to solve them, so whilst I do that, I keep an eye out for topics I can write about. At my job, that manifests itself as process-oriented documentation, but at home, it turns into a blog post, sometimes.
Be curious! I am often wondering why, or how, which leads down all sorts of avenues that I didn’t expect. One quick look at my previous blog articles and you can see that I cover an awful lot of things and for me, that’s great food for my brain.
In addition, some of the things I write about also come from discussions with other people. They can know so much more than you in some facet of IT, so learn from them. If they mention something you don’t know about, study it yourself and then write about it.
Online media is another great source. I learn heaps from Twitter, other people’s blogs, articles etc. Keep consuming so that you can be a producer, too. As an example, on Product Hunt the other day, the number one voted item was a Captcha based on Doom. Really! That got me thinking about making my own one, perhaps with a game JS library. And if I make it, you can bet I am going to write about it, too.
My advice is that when the wind is in the right direction, take advantage of it. By that I mean, if you can, write more than one blog article a day, because you can bet life, circumstances or plain old tiredness will mean that some days you won’t want to write anything. Having an article or two that you can publish when you didn’t have any spare time will be a godsend.
This isn’t as difficult as it sounds because some days you will find you have a little more time than you anticipated. Or, perhaps the blog article you wrote was quicker to do than usual? Maybe you are in the mood for writing more? Take advantage of these tail winds and type up an extra blog post, just in case.
It is really handy to keep a log of any ideas you have. You can use your phone, a notebook, scraps of paper even, but write those ideas down because otherwise, they can be fleeting. I’ve lost count of the fantastic (in my estimation, ahem) ideas I have had when I woke up, and then forgot all about, forever lost.
If you can, flesh them out a little with screenshots, notes, output, URLs, etc. too, because it will make it easier when you do come back to them or are suffering from writer’s (and idea’s) block.
It might be tempting to just keep churning out the articles without reading them but that would be a mistake. It is very easy to make typos and errors so read over your blog articles a few times, and in a few different ways. For instance, I read the Markdown, but also like to see it on the webpage. In doing so, it tricks my brain into reading it more closely.
Another idea is to ask someone else to read it. Twitter is really good for this because there are lots of really kind people about that will chip in if you ask nicely.
I still make mistakes, but I catch an awful lot more and doing this helps me think through any bits that need to be added. It can also give me suggestions for other blog entries, especially where I know there is more to say on a particular topic.
Writing for 30 days may seem daunting, but it does get easier. Your writing will hopefully improve, and so will the speed at which you complete the blog posts, so hang in there till the end.
Also, consider breaking up bigger blog articles into parts. I did this with the small JavaScript program to convert weight and the bigger one on the Game of Life and Jest which I am still writing. Had I placed all those in one article, it would have been a chore to write and to read. For some of you, it might still be a chore to read, haha!
I’ve often read that if you are looking for regular readers, don’t do what I do, which is write about all sorts! Targeted blogs seem to do better for building up a following, so aim for that if that is your goal.
Lastly, don’t worry about needing to know everything, either. I certainly am not an expert in anything, but I do want to learn and so do many others. I have also found many, many times, that things I thought were easy or that everyone knew, were much harder or unknown to others. Equally, it was the opposite way round from their perspective too. No-one can know everything and that’s probably for the best.
Well, that’s it. I hope that you found this useful and that it may encourage you to start writing. Send me the URL for your blog and you’ll get me reading it at least :-)
Firstly, why would you want to do this? We often develop our sites in the browser on our laptops or desktops, and once we get things looking as we hoped, can sometimes forget that not everyone is using the same setup. Therefore, it’s really important to see what your site would look like on a tiny screen…an iPad…or some other mobile device. At times, you can be surprised how different that lovely menu can look, so it is definitely worth doing this.
Here, you can see an example of a device (a Galaxy S9) and how the Pixabay site looks on it.
There’s quite a lot you can do with this beyond the visual dimension, so let’s quickly run through the 4 numbered areas shown.
Chrome is almost exactly the same so you will be able to work out what to do with that without any screenshots. The only difference is that the device icon shown above is on the left, rather than the right side.
We came across a weird situation at work which we hadn’t seen before where changes to CSS files weren’t being reflected on our Apache served website. Why? The Apache module: Page Speed.
This isn’t a criticism of that - in fact, it’s often said that using that module can sometimes increase the speed of your site 4x. This is a warning, however, and some tips on how to get round the fact that it will aggressively cache your files, and might catch you out!
Assuming you do have Page Speed installed in your Apache website, you can turn off the whole module and instantly see whether caching is causing your issue.
In my installation, I can find the Apache configuration files at this location: /etc/apache2/
but on other distros, it might be: /etc/httpd
, so check which applies to your server.
Starting from the location of your Apache installation, we want to look into the mods-available
folder and if you see the Page Speed configuration file, you have it installed.
$ cd mods-available
Assuming it is, let’s switch it off.
$ sudo nano pagespeed.conf
Now look for the line which will probably say ModPagespeed on and replace on with off. Save the file.
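If you prefer a non-interactive edit, sed can flip the setting for you. This sketch works on a stand-in file; point it at your real pagespeed.conf (with sudo) instead:

```shell
#!/bin/bash
# Stand-in for /etc/apache2/mods-available/pagespeed.conf
conf=`mktemp`
echo "ModPagespeed on" > "$conf"

# Flip on -> off in place
sed -i 's/^ModPagespeed on/ModPagespeed off/' "$conf"
cat "$conf"   # ModPagespeed off
```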
Lastly, restart Apache
$ sudo service apache2 restart
Now visit your site: if Page Speed was the cause of your stale files, the caching shouldn’t be happening any more.
Still inside the mods-available
folder mentioned above, you can quickly find out which folder is being used to cache files with this command:
/etc/apache2/mods-available$ grep ModPagespeedFileCachePath pagespeed.conf
Make a note of that folder because you can use it to empty the cache, later.
This is a handy URL you can use to switch off Page Speed. From what I can tell, it doesn’t empty the cache, but stops some of the bundling that Page Speed does. Try it and see; visit:
https://yoursite.com?ModPagespeed=off
Once you know where the files are being cached (see Determining the Cache Folder), there is an easy way to tell Page Speed to empty the cache. Either create or refresh a named file in that location, which you can do like this:
$ sudo touch /var/cache/mod_pagespeed/cache.flush
Remember to replace the folder path with your cache folder, if necessary. touch
, if you weren’t aware, is a useful command that will either make a file or reset its timestamp to the current date and time.
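Here is touch in action, using a throwaway path so nothing on your system is disturbed:

```shell
#!/bin/bash
flush_file=`mktemp -u`    # a fresh path; the file does not exist yet
touch "$flush_file"       # first touch creates the file
touch "$flush_file"       # second touch only refreshes its timestamp
ls -l "$flush_file"
```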
Assuming Page Speed is running, a short while later (under a minute) the whole cache should be emptied, ready for you to test your website again.
I’m currently working on a legacy PHP application which has a variety of partial HTML files using this style of code:
<?php echo 'hello'; ?>
I think it looks nicer using the short tag version, but only for simple statements like the above. Given a choice, I would rather it looked like this:
<?= 'hello'; ?>
Changing it manually is a bit fiddly and error prone, as you can imagine. Moreover, I was rather surprised that PHPStorm doesn’t already cater for this with its reformatting rules, which I wrote about in a previous blog post.
There is a solution though: CS Fixer, and this brief blog post will show you how to install it in PHPStorm and automatically correct those kinds of code styles. Once installed, you can of course get it to do much, much more.
There are only a few short steps but let’s start with where we’re going to install the fixer. Begin by changing directory to your project root folder. For this, let’s assume mine is: c:\temp\project
.
Now use composer to install CS Fixer:
$ composer require --working-dir=c:/temp/project friendsofphp/php-cs-fixer
I’ve chopped the above output a little for brevity at the three ellipses, of course. Running this will produce much more logging.
We’ve got the fixer installed but need to now tell it how to handle those short tags. For this, create a file named: .php-cs-fixer.dist.php
in the project’s root folder. This configuration was based on the one from the original repo and should be placed inside it:
<?php
There’s a lot of information there, but for now, focus your attention on this part:
$config
Inside the rule array, we’ve added one which CS Fixer will recognise related to short tags (echo_tag_syntax
). You can find out the full options on this page, but basically, we are choosing to replace the long form tags with the short, and only in cases where we are simply echoing items.
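As a minimal sketch of that rule on its own (the full configuration from the repo sets much more), the relevant part of .php-cs-fixer.dist.php could look like this — written to a temporary file here so the example runs anywhere:

```shell
#!/bin/bash
cfg=`mktemp`
cat > "$cfg" <<'EOF'
<?php
// Minimal sketch: just the echo_tag_syntax rule discussed above
$config = new PhpCsFixer\Config();
return $config->setRules([
    'echo_tag_syntax' => ['format' => 'short'],
]);
EOF
cat "$cfg"
```

In a real project, save that content as .php-cs-fixer.dist.php in the project root rather than a temporary file.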
Everything is ready to use - we just need to tell PHPStorm all about it, which is what we’ll do now.
File > Settings... or Control-Alt-S
Tools > External Tools
Click the + icon at the top-left of the panel on the right. The following screen (albeit unfilled) will appear.
Now set the following fields as shown:
Name > CS Fixer
Description > Fixes files automatically (not essential)
Program > Path to the CS Fixer program. For me, that’s: c:\temp\project\vendor\bin\php-cs-fixer.bat
Arguments > --verbose --config=c:\temp\project\.php-cs-fixer.dist.php fix "$FileDir$/$FileName$"
Working Directory > $ProjectFileDir$
Click OK.
We’re almost done. To make things a little easier, let’s bind some keys to run CS Fixer automatically. In PHPStorm, do the following:
Control-Alt-S or File > Settings...
Keymap > External Tools > External Tools > CS Fixer
Add Keyboard Shortcut
I went for Control-#
Open up a PHP file which uses the long tag echo format, and press your key combo. An output window will pop up, followed by a lot of verbose output, and then CS Fixer will hopefully do its magic!
Here’s some example output when I ran it last.
c:\temp\project\vendor\bin\php-cs-fixer.bat --verbose --config=c:\temp\project\.php-cs-fixer.dist.php fix C:\temp\project\partial.phtml
In this example, I installed CS Fixer into the project folder, but you might choose to install it somewhere centrally for all projects. Just adapt the commands above when you run composer
and remember to point the configuration file parameter to somewhere central, too.
You’re busy. You’ve got stuff to do. Here are Git aliases in 5 minutes.
That’s right! A colleague recently showed me these and I didn’t know they even existed. Mostly, if I wanted to have an alias (certainly on Linux), I would use the alias
command, but this might be useful if you don’t have that feature on your OS.
You create aliases using git config --global alias.
like this:
$ git config --global alias.co 'commit -m'
The part after the alias.
is the name of the alias, and the rest of the line is what it will be replaced with. In this example, we’ve created an alias which lets us do this:
$ git co 'A new commit message'
As you can see, the co
is replaced with commit -m
, making it equivalent to:
$ git commit -m 'A new commit message'
It’s also possible to use git aliases to run commands using the exclamation mark character. This is an example:
$ git config --global alias.rgi '!rm .gitignore'
Very contrived, I know, but this command (rgi = remove git ignore) will delete the .gitignore
from the current directory.
This example is a little more useful. If you are sharing a server account (eek!?) and your colleague and yourself are both using it to commit changes to a repo, you will want to switch identities each time. You could do that with a normal alias, or as in this case, use git. Here’s how:
$ git config --global alias.me '!git config --global user.name "Stephen Moon" ; git config --global user.email stephen@example.com'
Now, I can switch accounts with this:
$ git me
Switch out the name and email address with your own, and you will be good to go.
Are you wondering where these aliases end up? Take a look at the file .gitconfig
in your home directory and you can see the entries under an [alias]
section.
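You can see this for yourself without touching your real configuration — this sketch points HOME at a throwaway directory first:

```shell
#!/bin/bash
# Keep the demo away from your real ~/.gitconfig
export HOME=`mktemp -d`
unset XDG_CONFIG_HOME
git config --global alias.co 'commit -m'
# The alias lands under an [alias] section in this file
cat "$HOME/.gitconfig"
```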
I’ve previously talked about how you can integrate Editor Config into your workflow with tools like PhpStorm but of course, the IDE is so full-featured that it can also be used to set standards within your code natively. Let’s explain how to pick a standard for PHP in this blog post.
To begin, fire up PhpStorm with a project of your choosing.
Now press Control-Alt-S (on Windows) to bring up the settings. You can get to them another way under File > Settings...
should you be on another OS or prefer menus.
From here, go to Editor > Code Style > PHP
or the language that you prefer.
Next, click on Set from...
which you can find on the right hand side, near the top.
Next, choose the style you prefer and then click OK
.
I’ve chosen PSR 12
, but you can see that there are options for Laravel, Symfony etc.
If you like, you can also adapt it to suit your preferences. In my case, I went to the PHPDoc
menu and changed some of the options over blank lines etc.
Lastly, find some code and press the following key combination to reformat it based on the code style you selected: Control-Alt-L
or from the menu: Code > Reformat Code
.
No excuses about poor coding style, now!