Tell me how you measure me, and I will tell you how I will behave.
— Eliyahu M. Goldratt
With that, defining the `_config.yml` couldn't be easier.

```
deploy:
```

So for now, running `npm run deploy`, as I will do shortly with this post, couldn't be easier. Note: this assumes you have previously set up SSH keys to handle the logging in.
A very in-depth look at the differences and trade-offs of HTTP/2 and even HTTP/3. The tests really inspire learning more about how each spec changes the way the browser works and operates. I expect to come back to this often as a refresher.
The `contain` property really seems magical. Almost one of those ideas that are obvious in hindsight. Why has it taken so long to get something like this?! I'm looking forward to using `contain` in most if not all of my future work, especially since it will contain the `z-index`. That solves many issues. Now it begs the question: how can I break the containment on demand?!
I have to say that I've never encountered the 'sticky hover' myself. In fact, I'm more used to having the issue of zero hover occurring, since the touchscreen has no idea that my finger is hovering! However, I'm all for more media features to help solve the issue. Even though this is not fully implemented yet, the solution is simple: `@media (hover: hover) and (pointer: fine) {}`.
A really cool breakdown of HTTP and how requests should be handled, via a visual state diagram. Kept up to date with pull requests, no less! Now someone just needs to sell these as posters.
Learning this lesson can take many years. I think the author comes to their conclusion in a well-thought-out way. However, this is something that will need reminding for years to come, especially when you have to 'teach' this lesson to a team member who insists on changes that really are semantic and subjective.
Full of detail. Truthfully, I'm still not all the way through it. Fascinating to get a behind-the-scenes look at how teams solve tough problems.
I think that I have read and reread this at least a few times in the past week. It’s very impactful and helpful to what I do. These models and the thinking behind them really give someone the edge, assuming that you practice them.
This is really impactful and gives a lot of clarity to the situations I myself have been in. I would expect this rule to be near the top of the list of what needs to be challenged in any software company. Imagine if this behavior was acceptable in anything that could harm someone, like rockets, jets or military equipment!
Luckily there is an easy fix to make sure that not only are the certificates still valid, but also that the site is using the latest version of the update software.
To get right to it, here are all the steps to take. This assumes that Ghost is installed on a Debian-based system (Ubuntu, for instance) and that Ghost CLI is installed and in use (this is the default when installing Ghost).
```
# Login as the user Ghost is running as (cat /etc/group and browse for the ghost user)
```
Ghost installs the ACME LetsEncrypt tool and then installs a periodic cron-job to keep the certificates updated. However, LetsEncrypt moved where the API for this lives in September 2019. So now the cron-job is failing and the certificates are no longer going to be updated. Eventually, this will cause the TLS handshake to break and the site will fail to load as expected.
What we’ve done above is update the LetsEncrypt ACME tool to the latest version which contains the new API for keeping the certificate up to date. From this point forward, the site should stay encrypted!
You may have seen this message.
If the services you are using are not using certbot, then it's unlikely the rest of this will help fix the issue. What you should see with the command below is that the installed version is less than the candidate version. This means it's important to move to the latest version. Ubuntu's latest version at the time of this writing is 0.27.0### while a service in use from the last couple of years may have the 0.23.0### version (like in the output below). Getting the latest version will also push the services onto the latest ACMEv2 protocol.
First, check what version you are running.

```
drew@geedew:~$ apt-cache policy certbot
certbot:
  Installed: 0.23.0-1
  Candidate: 0.27.0-1~ubuntu18.04.1
  Version table:
     0.27.0-1~ubuntu18.04.1 500
        500 http://mirrors.digitalocean.com/ubuntu bionic-updates/universe amd64 Packages
 *** 0.23.0-1 500
        500 http://mirrors.digitalocean.com/ubuntu bionic/universe amd64 Packages
        100 /var/lib/dpkg/status
```
Now install the latest certbot.
```
drew@geedew:~$ sudo apt install certbot
```
And verify again that the latest is installed.

```
drew@geedew:~$ apt-cache policy certbot
```
And finally, make sure a dry-run works.

```
drew@geedew:/home/drew:~$ sudo certbot renew --dry-run
Saving debug log to /var/log/letsencrypt/letsencrypt.log
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Processing /etc/letsencrypt/renewal/geedew.com.conf
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Cert not due for renewal, but simulating renewal for dry run
Plugins selected: Authenticator webroot, Installer None
Renewing an existing certificate
Performing the following challenges:
http-01 challenge for geedew.com
Waiting for verification...
Cleaning up challenges
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
new certificate deployed without reload, fullchain is
/etc/letsencrypt/live/geedew.com/fullchain.pem
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates below have not been saved.)
Congratulations, all renewals succeeded. The following certs have been renewed:
  /etc/letsencrypt/live/geedew.com/fullchain.pem (success)
** DRY RUN: simulating 'certbot renew' close to cert expiry
** (The test certificates above have not been saved.)
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
IMPORTANT NOTES:
 - Your account credentials have been saved in your Certbot
   configuration directory at /etc/letsencrypt. You should make a
   secure backup of this folder now. This configuration directory will
   also contain certificates and private keys obtained by Certbot so
   making regular backups of this folder is ideal.
```
Success! At this point the domain should be all up to date with the latest ACMEv2 protocol.
```
console.error node_modules/jest-mock/build/index.js:711
```
After some searching, I've found that there really is only one effective way to test whether Vue will throw an error. Arguably though, we would only be testing that Vue is doing what we tell it to do (not testing that we wrote a requirement on the property).
```
it('requires tasks', () => {
```
But if you follow This Resource, it's possible to instead test that there is a `required` attribute on the props object. So now, instead of testing whether Vue works, we are testing that the code has the required feature! Much better than checking for errors.
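As a rough sketch, assuming Jest and a hypothetical `TaskList` component (the `tasks` prop name comes from the test above):

```js
// TaskList.spec.js — a sketch; the component name and file path are illustrative
import TaskList from '@/components/TaskList.vue';

it('requires tasks', () => {
  // Inspect the component definition directly instead of mounting it
  expect(TaskList.props.tasks.required).toBe(true);
});
```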
The solution seems to be almost the same in both cases.
```
$> npm config set unsafe-perm true
```
What exactly is this doing? Is it ‘unsafe’?
Let's start with a search. So there is a config setting allowed in the `package.json` that can set this per package. But wait, it's always true unless running as root. I never run as `root`, or with `sudo` privileges, unless I am forced to. Let's take a look through the source code.
> Set to true to suppress the UID/GID switching when running package scripts. If set explicitly to false, then installing as a non-root user will fail.
>
> If npm was invoked with root privileges, then it will change the uid to the user account or uid specified by the user config, which defaults to nobody. Set the unsafe-perm flag to run scripts with root privileges.
```
function loadUid(cb) {
```
In other words, if we set `unsafe-perm` to always be true, we stop looking up the user and the group the command was run as, and we overload the defaults.
```
'unsafe-perm': process.platform === 'win32' ||
```
It's starting to make a bit more sense. The issue I'm facing is that my current user's permissions are unable to create the symlinks being asked for by the program. Setting `unsafe-perm` to `true` will force npm to attempt to always run within the context of the running script.
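As a quick, illustrative check (the file name is made up), a lifecycle script can log which uid/gid it actually runs as:

```js
// check-uid.js — hypothetical helper, run e.g. from a postinstall script (POSIX only)
console.log('running as uid:', process.getuid(), 'gid:', process.getgid());
```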
I suppose that this isn't 'unsafe', but it does force the package installer to never drop into user and group switching when installing apps. It's possible then that you may end up having the code run as 'root' when installing (which could then be considered 'unsafe').
This goes all the way back to February 7th, 2011!
Initially this would cause the app to either error out when `false` or toggle the user and group.
But finally, we reach the reasoning for this code (and still it’s not what many may expect).
> You can't please everyone. Don't try to be secure if it'll fail. If someone really wants to encourage this all the time, then they can do `npm config set unsafe-perm false` explicitly.
>
> In the npm 1.0 future, it will probably require sudo for global installing no matter what, and use this behavior for local installing. Since so many people have npm installing in their home dir, requiring sudo is causing more trouble than it's worth.
Boom.

This was added to help install files into 'safe' locations without `sudo`.
So I think that I may have the answer?
Use this sparingly; it's probably bad to set globally for all running scripts. If the user is running as `root`, it may cause the app to force changes that extend beyond what a running script should have. Never run as `root` and it will be `true` by default. Use `sudo` otherwise. It's not a magic bullet.
Some call this a 'sticky' footer (not to be confused with 'fixed').
The easiest solution is to use flexbox by wrapping the page with a 'tall' container. We can use the `body` or some `div`.
Define a class that will cause whatever element to take up as much space as possible.
```
.is-tall {
```
And next, add a container that can be placed inside that will 'grow'.

```
.is-tall-container {
```
The appropriate HTML will look like this.

```
<html>
```

Or with `<div>`s:

```
<div class='is-tall'>
```
Finally, the footer will be stuck to the bottom!
The `interface` pattern is rarely understood and almost never used in JavaScript. On a large project or in a big team, `interfaces` provide a critical abstraction, and maintaining and using them is something that architects or leads will focus on as a tool to keep code quality known and functionality expected. JavaScript does not yet have this basic functionality built in, as it's not an Object Oriented Programming language; instead, it leans (and most code in the wild is leaning) toward the Functional Programming behaviors of JavaScript. `Interface` patterns are still useful, and it's fairly easy to create something in JavaScript that contains the proper abstractions to benefit the code in a project.

Let's begin with the 'why'. The need for `interface` patterns, as stated above, may come from technical leads as a tool to shape the code toward what they expect to be common. More formally, `interfaces` provide a blueprint for a common API between two code bases. A few better definitions can be found on Stack Overflow. `Interfaces` are the abstraction of the structure of a class, not the definition of it. Using them 'guarantees' the object will contain the interface requirements.

In many places in code, especially in JavaScript, `undefined` checks are required. Lodash's `get` method is one great method that turns these checks into a common pattern. These checks bleed into software and make changing the software hard.

"Do we have this method, and if so, use it."

"Does the data contain this object, else give an empty array."

If the structure of the data or object changes, these safeguards must all be updated. Worse still, they can fail to be added and the page may error out. The `interface` pattern is one of the best tools to counter this. A not-so-hard-to-use example is taking advantage of the `extends` keyword with classes. In this case we want to create a Menu, but we want to make sure that all of our Menus have the same method to get their items. Each menu, however, will have its own items, and we will write many tests assuming `getItems` exists. We must make sure all menus have this method, or else we have to safeguard within all uses.
```
// MenuInterface.js
```
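As a rough sketch of the idea (everything beyond the `getItems` method and the `baseMenu` usage is illustrative):

```js
// MenuInterface.js — a sketch of the pattern, not the original post's exact code
class MenuInterface {
  getItems() {
    // Force every subclass to provide its own implementation
    throw new Error('getItems() must be implemented by the menu subclass');
  }
}

// BaseMenu.js — a hypothetical menu that fulfills the 'interface'
class BaseMenu extends MenuInterface {
  getItems() {
    return ['home', 'about', 'contact'];
  }
}

const baseMenu = new BaseMenu();
baseMenu.getItems(); // always safe to call
```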
In a way, we have provided a 'forced structure' for the Menu. This is the basic `interface` pattern, and it solves one of those major issues of safeguarding: we know that `baseMenu.getItems()` will exist and not fail. Further, we can use `flow` or `typescript` to guarantee the return types on the methods. More JavaScript engineers should take advantage of patterns, like `interfaces`, that provide proven abstractions to the problems being solved. They free the code and the developer to think about the logic, not every little implementation detail, by positioning the behaviors of the code into well-organized and easily understood logic.
WSL creates its own users with their own permissions, and this is the real crux of the issue. This user will have its own access to files and its own setup for Git and SSH config.

The first step within WSL is to create an SSH config for your user that will use the Windows user's files for keys.
```
mkdir -p ~/.ssh
vi ~/.ssh/config
```
Once in the Vi program (or use nano or whatever you like to edit with) enter the following config.
```
Host *
    IdentityFile /mnt/c/Users/WINDOWS_USER_NAME/.ssh/NAME_OF_KEY
```
You must replace `WINDOWS_USER_NAME` with the name of the account being used in Windows. Also, tell the config file the `NAME_OF_KEY` that you'd like to share. Usually this is `id_rsa`.
Finally, save the new config file; then we must change its permissions so that Linux will allow it to be used.

```
chmod 600 ~/.ssh/config
```
We are also able to share `known_hosts` so that the servers we are connecting to are known in both environments.
```
touch /mnt/c/Users/WINDOWS_USER_NAME/.ssh/known_hosts
```
This creates a symlink with the Windows `known_hosts` for better sharing in the system.
Now that ES6 has a finalized module definition (~~right now still in draft phase~~ out of draft phase! And some new ones coming), I've gone through and found some of the things that stick out to me as 'need to knows'. It's helpful to keep these few things in mind when working with the new module syntax.
ES6 modules export bindings, not values.
Exporting any primitive results in a 'live' binding whose value can and will be changed by the module that exports it. This is much different from CommonJS or AMD behavior.
```
// library.js
```
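As a small, illustrative sketch of the behavior (module and function names are made up):

```js
// library.js — a sketch of a live binding
export let counter = 0;
export function increment() {
  counter += 1; // the exporting module updates the binding
}

// consumer.js
import { counter, increment } from './library.js';
console.log(counter); // 0
increment();
console.log(counter); // 1 — the imported binding reflects the change
```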
The import of the integer isn't a pass-by-value situation. You are actually getting a binding to the integer itself; always be aware of this, as it will cause many problems when refactoring old code or in new usage, since it changes how the logic works. You can read some of the discussion on this topic here.
Returning an object from a module is an anti-pattern.
ES6 modules were designed with a preference for a static module structure. This allows the code's import/export values to be discovered at compile time rather than runtime. Exporting an object from a module allows for unexpected situations and removes the static compilation benefits. Take for instance this code:
```
// module.1.js
```
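As a rough, illustrative sketch of the kind of setup being described (file and function names are invented):

```js
// module.1.js — exporting a mutable object as the default
const api = {
  start() { /* ... */ },
};
export default api;

// module.2.js — another module mutates the exported object at runtime
import api from './module.1.js';
api.stop = () => { /* added after the fact */ };

// consumer.js — whether api.stop exists depends on whether module.2.js ever ran
import api from './module.1.js';
```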
The default export in this case is an object, which is actually a binding, not a value. That means that after exporting the object, different functions can be added to or removed from this export, which will update the actual exported module. However, there is no guarantee that this module, used in another manner, will expose the same added functions, since you may not always have imported the second module. This non-static style of exporting is typical in a CommonJS codebase (even NodeJS exports the fs object as a default like this), but it begins to break down in ES2015+ modules, especially while it's required to transpile (Traceur or ES6-module-transpiler, for instance) to use them today.

You can, however, get the same effect by using named exports and statically importing them. This is a much more useful way of handling object exporting and importing (and also useful to see how the AirBnB lint standards deal with this).
```
import * as _ from 'underscore';
```
It's important to note that one thing modern libraries have done to avoid entire imports is to create smaller libraries that can be imported separately for each method. This is highly advantageous, especially for 'tree shaking' scenarios.
```
import _ from 'lodash'; // NO
```
If you will have side-effects, separate them and load them in a module with short syntax.
The standard import looks something like `import something from 'somewhere/else';`. But what if the module you are importing isn't actually exporting anything and is only used to run code? As you move into modules, you will find at first that side-effects are going to happen. For example:

```
// ... code
window.myLib = lib;
// ^ side effect occurs when you import this module!
```
The only alternative is to separate this code into its own module.

```
// sideeffects.js
window.myglob = { ... };
window.myglobfunc = function() { ... };
export default null;

// init.js
import sideeffects from 'sideeffects.js';
import moresideeffects from 'moresideeffects.js';
```
But now you are having to create variables on the import statements; that is not pretty or maintainable. ES6 module syntax has a much better way of doing these imports that aren't actually setting variables to anything. Basically, import the file without requesting the exports.

```
// betterinit.js
import 'sideeffects.js';
import 'moresideeffects.js';
```
Dropping the variable and the `from` keyword allows importing side-effects without the need to make up variable names that are equal to null.
Attempt to use import default at all times.
Named exports are fine to use and part of the spec, but defaults are preferred and your code will flow better. It will encourage smaller modules that do less and will help keep your code a bit easier to test.
ES6 modules prefer default exports. This was by design. It becomes a code-smell if your files begin to look like bracket central.

```
// codesmells.js
import { namedvar1, namedvar2, namedvar3, namedvar4 } from 'poordesignedmodule';
import { anothervar, twovars, orthreevars } from 'anotherpoordesign';
```
After a dozen of these at the top of a single file, it should become very apparent that your modules are filled with too many functions and are not properly breaking down into smaller modules. It’s not bad to use named imports, but it’s a clear indicator that if all imports are using named imports, modules are doing too much and you risk having more bugs and complexity in them. It’s a good thing to keep in mind as a leading flag of a need to refactor.
Avoid extra syntax if exporting from imports.
It's really simple to fall into this trap.

```
import something from 'somewhere/else';
// ... code
var mysomething = something;
export { mysomething };
```
It might look ridiculous, but as you get into larger files, you may forget what's what and where it's coming from. You can avoid this by exporting directly from the other file.

```
export { default as something } from 'somewhere/else';
```
Some of these are tips, some are tricks; all of them feel new and arguably different to me and it’s good to be aware of them.
NodeJS provides an easy-to-use `fs.rmdir` command that follows the POSIX standard. This unfortunately means that it will error with `ENOTEMPTY` if there is any file in the directory you are attempting to remove. NodeJS doesn't have an easy way to force the removal, so you have to get fancy.
By far, the easiest, safest, and most cross-environment approach is to use rimraf, whose source code shows nearly 250 lines of premium quality work.
Or if you must, try trash. It’s really neat too.
But, if you are still reading, doing it yourself is easy and a valid option. The following is a synchronous way to handle the deletion of a directory that may not be empty.
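As a rough sketch of such a synchronous approach (the function name is my own):

```js
// removeDirSync.js — a sketch, not the exact code from the original post
const fs = require('fs');
const path = require('path');

function removeDirSync(dirPath) {
  if (!fs.existsSync(dirPath)) return;
  for (const entry of fs.readdirSync(dirPath)) {
    const entryPath = path.join(dirPath, entry);
    if (fs.lstatSync(entryPath).isDirectory()) {
      removeDirSync(entryPath); // recurse into sub-directories
    } else {
      fs.unlinkSync(entryPath); // delete files (and symlinks)
    }
  }
  fs.rmdirSync(dirPath); // the directory is now empty, safe to remove
}
```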
I posted this solution on Stack Overflow as well.
I've further added an asynchronous method for removing the directories. While it's more code (and that means possibly brittle), and I'm afraid I haven't battle-tested it as much, it's seemingly twice as fast in my rudimentary testing (however, all sorts of factors can affect a test like that).
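As a rough sketch of an asynchronous approach (names are my own; it assumes a Node version with `fs.promises`):

```js
// removeDir.js — an asynchronous sketch, not the original post's exact code
const fs = require('fs').promises;
const path = require('path');

async function removeDir(dirPath) {
  const entries = await fs.readdir(dirPath, { withFileTypes: true });
  // Remove the directory's contents in parallel before removing the directory itself
  await Promise.all(entries.map((entry) => {
    const entryPath = path.join(dirPath, entry.name);
    return entry.isDirectory() ? removeDir(entryPath) : fs.unlink(entryPath);
  }));
  await fs.rmdir(dirPath);
}
```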
I was asked by my PTO to help organize a parent-run, lunch-hour fun time with the students, planned for one day in each of 4 weeks. Parents sign up to teach students, and students sign up for classes to attend during their lunch hour to stay out of the cold. Previously, the students signing up for and attending one of 20 fun classes at lunch was handled within Microsoft Access and was entered into the system by a single individual. While we did have all of those Access files and I'm well experienced with Access, the actual data entry was better suited to Google's ecosystem. However, there was no clear way to create reports for each student's, teacher's, and parent's schedule once the data was entered. We needed the ability to generate reports based off of a Sheet.
The solution begins with creating template Docs that contain delimited words which will be replaced by cells from the Sheet, using a simple system of delimiters like `##` to define the variable words. Using left and right delimiters of two octothorpes, I created the necessary formats of the Docs that are needed.
The class schedule I need contains Student data with a page per student. This allows each student to have his or her schedule in their bag or for their parents to see.
It's best to have unique and repetitive naming of the variable words. In my case, I use all-uppercase characters that contain no spaces and are prepended with a generic naming scope. Student data begins with `STUDENT_`. I also have a possibility of `N` sessions per student, where the student may have 0 to 4 classes. I called these `SESSION_N_` data to make them easier to find and replace. Proper names make the code much easier to deal with; take care in choosing naming conventions.
Creating reports begins with writing an App Script within the Sheet. App Scripts can be created from within your sheet by opening the Tools menu and choosing the App Script Editor.
Once in the editor, you can create methods that will grab the template, create a new document, run through your sheet's rows, and append the template, filled with the data, to the new document. This script does quite a bit.
At the top of the file, set up a mapping to the Sheet's cells. Doing this makes updates easy: the variable name in the Doc template matches the key in `SPREADSHEET_MAPPING`, and the value is the column in the Sheet's row as we loop through each row.
```
/**
```
The `onOpen` method taps into the script editor's ability to add a menu item that can then be used to run the app.
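As a rough sketch of what that menu hookup typically looks like (the menu title and handler name are assumptions):

```js
// Adds a custom menu when the Sheet is opened
function onOpen() {
  SpreadsheetApp.getUi()
    .createMenu('Schedules') // menu title is an assumption
    .addItem('Generate student schedules', 'createStudentSchedules') // handler name is an assumption
    .addToUi();
}
```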
Appending to a Doc requires the code to handle what exactly is being appended. It's best to append a single 'paragraph' of text or a 'table'. This code, to keep it simple, ignores the need to handle other types.
```
function appendElementToDoc(doc, object) {
```
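As a rough sketch of what such a helper might do (the handling details are assumptions):

```js
// Appends a copied element to the target document's body: paragraphs and tables only
function appendElementToDoc(doc, object) {
  var body = doc.getBody();
  var type = object.getType();
  if (type === DocumentApp.ElementType.PARAGRAPH) {
    body.appendParagraph(object.copy());
  } else if (type === DocumentApp.ElementType.TABLE) {
    body.appendTable(object.copy());
  }
  // Other element types are intentionally ignored to keep things simple
}
```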
Getting Sheet data is very simple. Grab the active Sheet and the active tab, and copy cells from the 2nd row, 1st column through the last row and last column. It would make sense to turn the starting point into variables.
```
function getSpreadsheetData() {
```
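As a rough sketch, assuming the '2nd row, 1st column through last row and last column' range described above:

```js
// Reads every data row (skipping the header row) from the active sheet
function getSpreadsheetData() {
  var sheet = SpreadsheetApp.getActiveSpreadsheet().getActiveSheet();
  var numRows = sheet.getLastRow() - 1;      // skip the header row
  var numColumns = sheet.getLastColumn();
  return sheet.getRange(2, 1, numRows, numColumns).getValues();
}
```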
Creating a document and adding a document to a folder use the bare minimum of script editor API code, so we can move along.
Generating a student schedule: the magic. Some setup first at the top. Again, it would be straightforward to turn some of this into variables. We create a new document called 'Student Schedules' with a timestamp here. Then we add this document to the folder held in a variable; this is relative to the current script that is running (the Sheet you ran it from).
```
Logger.log('>>>');
```
Now the code must clone the template and create a new document from it. For performance, the code loads the new document into a variable in memory.
```
var docid = DriveApp.getFileById(templateid).makeCopy().getId();
```
Next is looping through the Sheet's rows, pulling the content from each cell, and replacing the items in the template (which we make another copy of on each loop).
```
for (var i in data){
```
The variable mapping set up above makes this generic and easy.
```
// quickly loop through and update all mapped variables
```
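As a rough sketch of that loop (the `body` and `row` variable names are assumptions):

```js
// Replace each ##VARIABLE## in the copied template body with that row's mapped value
for (var key in SPREADSHEET_MAPPING) {
  // e.g. replaces ##STUDENT_NAME## with the value from the mapped column
  body.replaceText('##' + key + '##', row[SPREADSHEET_MAPPING[key]]);
}
```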
Uh oh, here is the first major performance issue. It's very easy to push too much content into the Doc at once. As the code loops, every hundred rows it pushes the buffered content to the document and then buffers some more. This is a major performance improvement for the code.
1 | Logger.log("Appending to the main document "+body.getText()); |
Last, close and save the template and let the user know with the ‘toast’ notification.
```
doc.saveAndClose();
```
Folder paths are seemingly easy, but I recommend sticking with relative paths. The issue comes to light when the script is shared and others have very different folder paths and depths.
`let`, `var`, `const`: ES2015 is only partially supported. In fact, the script editor is mainly JS 1.6 with some enhancements; officially, only partial support for 1.7 and 1.8 exists.
It can be tricky to get performance correct. Opening and appending to documents is the most intense. Batching output is highly recommended along with caching the document in memory and making changes there.
Logger works, but it can be hard to find and open. I had to hop between tabs and page refreshes to debug.
It's actually very difficult to take text Nodes and move them into another document. For simple single elements, like a paragraph or a table, it's easy. I recommend using a library to handle any significant scripting.
You can't remove the last paragraph from a document. This means that if you 'copy' the template, it will error if it's the last item (as that would essentially remove it!). This is why the code above is 'cloning' the template into memory.
Google App Script is really fun. The power that exists is very nice and it’s almost thrilling to get that kind of control which allows for some really great automation that can occur. I’ll likely continue at some point finding other automation scripts I can run.
Teams will be made or broken by communication. If communication fails, and trust is abandoned, the team would have been better off if they had hired no one.
Having anyone is sometimes better than having no one. But having the right person is better than having the “right now” person.
Have a process for hiring; no really, create one today. It's incredibly time consuming and hard to grow a team without some kind of flow for what to do when bringing in possible employees. From phone calls, to emails, and 1-1s, details get lost. Have a process to document all of these steps.

Teams will only benefit by asking many team members to do an interview. Time is the only limitation on how many you want to partake in the interviewing process. 3-5 hours of interview time is typical. Anything less and the risk of hiring the wrong employee is too high. Teams should consider having at least 2 interviewers in the room at any time. In fact, it's a learning opportunity for everyone in the room every time it's done.

As a team, always stop after interviews are done and retrospect. Let opinions come out and the team decision should become obvious. As a boss, however, take care in making a decision that isn't necessarily what the team thinks they want, but focus on what is needed. The most important question to answer is not 'Can this person do the job', it's 'Can this person communicate within this team effectively'. It's critical to understand that the smartest person interviewed is not always who should be considered. Teams work best when there is trust among the team members, so a team member's attitude is everything. Teams need to hire those they will work best with and that can do the job, in that order. Teams will be made or broken by communication. If communication fails, and trust is abandoned, the team would have been better off if they had hired no one.
The simplest method taken is to place the process squarely within the mind of the new hire.
Teams work best when there is trust.
If these tips helped you or you have more to add, find me @geedew and let me know.
Take care naming tags and branches to keep from confusing Git.
Assume there is a `master` branch of code and a `develop` branch of code. Daily work happens in the `develop` branch. Moving code into the `master` branch creates a releasable set of code. If the team wishes to be able to maintain the release for any period of time, a Tag should be created at the point at which the code has diverged.
You should never name a tag and a branch the same name!
It makes sense in a case like this to use naming conventions on the tags to keep from colliding on the branch names.
```
git tag develop-v1.0.0
```
Versus when we are releasing code from the `master` branch.

```
git tag release/v1.0.0-rc1
// or
git tag release/v1.0.0
```
You can find the common parent in git using `merge-base` to place a Tag on code from the past.

```
git merge-base branch1 branch2
0f345000facddd090939209dcaef... // etc
```
If the team is only using `master` and `develop`, collisions with these two branch names will be very rare. However, feature branches and release branches bring many more opportunities.
If a collision has occurred, Git will relay that with a message like the following. Assume we have mistakenly created a Tag `release/v1.0.0` and a Branch `release/v1.0.0`. What will happen if we tell git to 'checkout'?
```
git co release/v1.0.0
```
By default, git has chosen the Branch. If we meant to ‘checkout’ the Tag, being more specific is required.
```
git co refs/tags/release/v1.0.0
```
Notice how we added `refs/tags/`. This is what can be found within the `.git` folder; the folder structure there is the same as what needs to be prefixed. We could also specify `refs/heads/` if we wanted the branch.
At this point, any git command can be made specific enough to check out any branch or tag without "ambiguity". Still, it's best to rename the branch by creating a new one and deleting the previous branch, or to remove and recreate the tag, so that the two no longer collide.
Wordpress must be removed.
This site runs within Digital Ocean. That allows for easy maintenance and expansion. But, previously, Ubuntu 15.04 was being used and Docker really wasn’t going to work that well. Not to mention, server setup and deployment is automated with Ansible. Moving to a new DO server was a must, so Ubuntu 17.04 is now in place.
Wordpress is fine. I even recently used it to build out Grace Bible Church Northwest and prior to this, I’ve used it on this site since… 2009?! Yeah, as a long time user and PHP developer, I was in very early on Wordpress. A few things just weren’t working out for me.
I'm really liking the simplicity and usefulness of having a static HTML website. Hexo, like the popular Jekyll it has a lot in common with, fit the bill of being the Node setup I was looking for. In development, I can have a quick Node server, while on Geedew.com Nginx hosts the static HTML.
I’m hoping to deliver small features over time. I’ve cut back and that allows me to work on delivering the small details in quality releases.
Everything changed between the html tags. I was able to convert the logo to an SVG and that also meant the entire color scheme changed as well.
The dates, tags, and archive links were all removed. Instead, I wanted to have relative dates to get a feel for the age of a post (3 days ago, etc.). Some of those will come back if and when I decide to add them.
All the colors and design is new. And it will improve over time.
Something comes next, or nothing gets done
In all likelihood, the process that must be taken to get to the first or next step is a known process to those asked to do it. Be it a discovery process, a design process, a development process, or an organizational one: something comes next, or nothing gets done. Processes, or the steps taken to do work, can be improved as the logical outcome of thinking and planning. Anything that takes effort can be done more consistently and efficiently with a well-thought-out process.
How is a planned process recorded? Is it written down? Is it a set of checkboxes that are checked off as work has been completed? Can it be verbal, or simply remembered or a visual cue? Regardless of how a process is communicated, the level of communication can and will determine the impact and integrity. Without a somewhat rigid procedure, getting work done can have unpredictable outcomes with unknown efficiency. So a well thought out and practiced process tends to solidify what can be done, and when and how work will happen at what expected efficiency.
A recorded process is sometimes referred to as 'the standard'. There may be a notation on a procedure to take in any given situation. This becomes known as the SOP, the standard operating procedure. Many times the SOP is considered a learned process that is known by all workers inherently. When approaching a stop sign while driving a vehicle, for instance, the SOP is to stop and look before entering the intersection. Then a learned process is taught to all drivers on what procedure to take when there are other cars. In my experience it is almost ubiquitously known that getting a 'wave' from another driver means you can go, even though this is never a taught thing. This is an example of a process that occurs, but is never taught.
It is important for a team of workers to have a process that is not simply known, but one that is taught, documented, and improved with regularity. The leaders of the team cannot have 'waves' in the process that everyone is expected to understand. New team members especially will be confused and will fail to do their work. A good leader knows when to 'wave' and when to document a rule.
A team will stalemate when too many rules are applied. This can be observed within most bureaucracies. The rules are extremely organized and well documented, but so many procedures are observed in the processes that no one person can understand, let alone follow, them. This leads to specialists within the team, which leads to scaling issues. In fact, more procedures and processes will cause the team to lose control of what they are attempting to build. Once this happens, the work the team does will be reliant on another party in a way that impedes the team.
There is not a fine line that separates a trim process from an overbearing one. There is, ironically, a process to determine whether the larger set of steps within the work being done needs to be trimmed or grown. This is typically referred to as a retrospective. Time taken with regularity by a team to change and control a process will determine the success or failure of the team.
As a takeaway consider the team processes you’re a part of. Are they well communicated and followed up on with regularity? Can your personal processes be improved with a little bit more focus from these same tools of thought?
One of the first items not to miss or be confused about when starting a TypeScript project is setting up the `tsconfig.json` to describe how to resolve the paths for inclusion in the app being built.

In my case, being reminded to use the `node` resolution strategy for importing from the `node_modules` folder was a requirement.
```
{
```
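As a minimal sketch of the relevant compiler option (the surrounding settings are assumptions):

```json
{
  "compilerOptions": {
    "moduleResolution": "node"
  }
}
```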
Also see some further examples here.
https://github.com/chanon/typescript_module_example
AutoHotkey is a scripting language designed for Windows to automate usage. A lot can be accomplished by re-mapping inputs.
While Windows 10 does allow apps to be launched by the Surface Pen, these mappings can be messed up and are very limited. What if the pen's single click was "Copy" and the double-click was "Paste"? Something like this is easily achieved with AutoHotkey.
```
; remap surface pen button to copy to clip-board
```
Simple mappings can be achieved with the short syntax; however, the Surface Pen tends to have some problems with this when turning off the hotkeys. More success can be had by using the full syntax and applying the down and off state.
```
#F20::Run Onenote ; Single click, Open OneNote
```
This is the beginning of using AutoHotkey. It’s been around for many years and has thousands of options that can be scripted including mouse moving and deep Windows interactions. Check out https://github.com/dantheuber/WinTop-AutoHotKey for one of my favorite ways to keep a window on top with a transparency.
An SSOT can allow a lower barrier to entry to the codebase. Take the package.json that many web-based teams have accepted over the last 5+ years. This is an SSOT for many things, including the name of the software, its version, and what packages it depends on. It can also have the correct way to test, share, update, run, build, etcetera, all clearly defined within one convenient file. This may also be why many are choosing NPM scripts over [insert build tool of choice here].
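As an illustration (the specific scripts and dependencies are made up, not from any particular project):

```json
{
  "name": "my-app",
  "version": "1.2.3",
  "scripts": {
    "test": "jest",
    "build": "webpack --mode production",
    "deploy": "hexo deploy"
  },
  "dependencies": {
    "lodash": "^4.17.0"
  }
}
```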
Another SSOT is having a single state within the application, like redux (or countless others). Driving the user interfaces in a one-way, read-only manner allows for much simpler logic in all the right places.
When working in code, look to minimize the sources of truth and the work being done will suddenly become much easier to manage and enjoy.