Making a custom agent with OpenAI is fun. You can create a bot with a particular purpose quite quickly with just a pre-prompt. It's a multi-part process though:

  1. Create an assistant - this can be reused
  2. Create a thread that uses an assistant
  3. Continue the thread by referencing the thread id that was generated

OpenAI keeps threads for a little while, but it's probably not a good persistent storage solution… So what's the solution to storing long threads? Chuck it all in a DB?
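To answer my own question: one option is to mirror each message into your own storage, keyed by thread id, as it is created. A minimal in-memory sketch (ThreadStore and its shape are hypothetical, not an OpenAI API; a real version would write the same columns to a DB table):

```typescript
// Hypothetical sketch: mirror user/assistant messages into our own storage,
// keyed by the OpenAI thread id, so conversations survive OpenAI's retention.
type StoredMessage = {
  threadId: string;
  role: "user" | "assistant";
  content: string;
  createdAt: number;
};

class ThreadStore {
  private messages: StoredMessage[] = [];

  // call this after every message sent/received via the Assistants API
  append(threadId: string, role: StoredMessage["role"], content: string) {
    this.messages.push({ threadId, role, content, createdAt: Date.now() });
  }

  // rebuild the full conversation for one thread, oldest first
  history(threadId: string): StoredMessage[] {
    return this.messages
      .filter((m) => m.threadId === threadId)
      .sort((a, b) => a.createdAt - b.createdAt);
  }
}
```

Swapping the array for a real table would be the obvious next step, since the query is just a filter + sort on threadId.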



I launched shibes.lol and it's been a success. Amazing to see my ShibaEveryHour audience participate and add their shiba photo submissions. It's been so much fun.

This project validates my full-stack Cloudflare Astro starter:

  • Astro.js - full-stack web framework
  • Cloudflare D1 - storage
  • Cloudflare R2 - images / objects
  • Lucia - authentication


Turbo development on shibes.lol. The focus: build a tool to submit shiba photos as quickly as possible. I still keep getting distracted by building non-important things, like basic auth and silly CSS 🤦.


  • Zinc, daily 10-15 mg (zinc to be balanced with copper in a ratio of 10:1, because more zinc will reduce copper in your blood)
  • Daily sunlight
  • Vit D/C
  • Biotin 30 mcg (only if deficient)

  1. Male Pattern Baldness PREVENTION
     a. Saw Palmetto 320 mg daily on a full stomach
     b. Finasteride 0.25 mg a minimum of 4 times a week (can cause erectile dysfunction, so take Rosemary Oil or Castor Oil). Use the spray version of Fin.

  2. REGROW
     a. Scalp massage, 5 mins minimum
     b. Minoxidil, apply daily, minimum 1-2 times
     c. Microneedling / derma pen 1 mm (press in, don't roll), 1-2 times a week
     d. Gelatine supplementation (makes hair thicker). Gelatin is made from animal collagen such as bones, skin, tendons and ligaments.

  3. AVOID
     a. Processed foods, smoking & alcohol
     b. Dehydration, stress, poor sleep, showering with fluoride


  1. Take Biotin if deficient. Minoxidil doesn't prevent hair loss; Finasteride is a must but has side effects. They come in sprays and oils, so use those topical ones.
  2. Try natural ways; avoid all oral medication and try topical routes as much as possible.
  3. There are two DHT pathways in our body. Saw Palmetto can only block one pathway and, apparently, Finasteride blocks the other. But try only Saw Palmetto first and see, before you try the spray version of Finasteride, because it seems unavoidable.


  1. Lysine (via grass-fed beef, eggs - make sure you're getting enough protein)
  2. Iron
  3. Probiotics - having a healthy gut
  4. DHT inhibitors like zinc, pumpkin seed oil, green tea extract, nettle root
  5. Lower stress
  6. B vitamins https://youtu.be/9yr7A1fOuKM?si=KtNQO8Ohx-wRg_ya


More full-stack Cloudflare teething issues. Turns out it's really hard to connect to D1 remotely. But how does Drizzle do this with their D1 client, I wonder?
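On that note, drizzle-kit appears to solve remote access with its d1-http driver, which talks to D1 over Cloudflare's REST API rather than a direct connection. A hedged sketch of a drizzle.config.ts (the env var names and file paths here are my own):

```typescript
// drizzle.config.ts - sketch assuming drizzle-kit's "d1-http" driver,
// which reaches D1 via Cloudflare's HTTP API instead of a local binding
import { defineConfig } from "drizzle-kit";

export default defineConfig({
  dialect: "sqlite",
  driver: "d1-http",
  schema: "./src/schema.ts",
  out: "./drizzle",
  dbCredentials: {
    // hypothetical env var names - use whatever you store these under
    accountId: process.env.CLOUDFLARE_ACCOUNT_ID!,
    databaseId: process.env.CLOUDFLARE_DATABASE_ID!,
    token: process.env.CLOUDFLARE_D1_TOKEN!,
  },
});
```

The token would be a Cloudflare API token with D1 permissions, so this works from anywhere you can make HTTPS requests, not just inside Workers.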


Displaying R2 image objects as URLs the hard way:

```ts
const APP_BUCKET = Astro.locals.runtime.env.APP_BUCKET;

const image = await APP_BUCKET.get("local-image", {});

const imageBuffer = await image?.arrayBuffer();

// note: spreading a large Uint8Array into String.fromCharCode can overflow
// the call stack for big images - fine for small ones
const imageBase64 = imageBuffer
  ? btoa(String.fromCharCode(...new Uint8Array(imageBuffer)))
  : "";

const imageUrl = `data:image/png;base64,${imageBase64}`;
```

Displaying images from R2 the easy way.

Note: linking your custom domain is the preferred method when going to prod, as you can use existing Cloudflare caching - https://developers.cloudflare.com/r2/buckets/public-buckets/#custom-domains

Transform images with Cloudflare Images and their URL API: https://developers.cloudflare.com/images/transform-images/transform-via-url/

Note: you must have a domain to be able to do this. Once enabled, it exposes a cdn-cgi sub-path on your domain (a.k.a. zone), e.g. https://chiubaca.com/cdn-cgi/image/width=80,quality=75/https://assets.chiubaca.com/PxcxZE6aGI_kLOEM6gq_400x400.jpg
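Since the transform URL is just string concatenation, a tiny helper can build it. buildTransformUrl is my own hypothetical helper; the URL shape follows the Cloudflare docs above:

```typescript
// Builds a Cloudflare image-transform URL of the form:
//   https://<zone>/cdn-cgi/image/<options>/<source image URL>
// buildTransformUrl is a hypothetical helper, not a Cloudflare API.
type TransformOptions = {
  width?: number;
  quality?: number;
  format?: "auto" | "webp" | "avif";
};

function buildTransformUrl(
  zone: string,
  source: string,
  opts: TransformOptions
): string {
  // join defined options as comma-separated key=value pairs
  const parts = Object.entries(opts)
    .filter(([, v]) => v !== undefined)
    .map(([k, v]) => `${k}=${v}`)
    .join(",");
  return `https://${zone}/cdn-cgi/image/${parts}/${source}`;
}
```

e.g. `buildTransformUrl("chiubaca.com", "https://assets.chiubaca.com/photo.jpg", { width: 80, quality: 75 })` reproduces the URL pattern from the example above.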



Working on the full-stack Astro Cloudflare starter. Trying to integrate Lucia auth with Drizzle and Cloudflare D1. Lucia's docs are barebones and imprecise. Luckily this video helped a lot! https://www.youtube.com/watch?v=urAHMyBXM6k

Discovering lots of weird quirks with Cloudflare Pages. The hardest thing to reason about is the discrepancies between local dev and production.

The main thing to remember is that local dev is not a Cloudflare runtime, so we need to be conscious of library choices. This is not a bad thing really…

Environment variables are weird:

  • Pushing a local codebase to Cloudflare Pages, it absorbs the envs you have specified in your .env.

  • Using a GitHub branch deployment, environment variables specified can't be accessed via Astro's import.meta.env. Instead they appear to be injected into the Cloudflare runtime and can be accessed via Astro.locals.runtime. This object is available in .astro files and middleware. https://docs.astro.build/en/guides/integrations-guide/cloudflare/#cloudflare-runtime

    • To use development secrets, use a .dev.vars file.


tsx lets you run any TypeScript file as easily as:

npx tsx ./script.ts


With the absolute failure of delete-fullstack-cloudflare, I've been exploring Astro.js as a full-stack web dev solution. Especially with the announcement of Astro Actions, it feels like Astro can stand its ground against the likes of Next.js.

New goal, get all these features working together in Astro:

  • Cloudflare D1 for storage
  • Cloudflare R2 for images
  • Auth (Lucia/ Auth.js)



Writing up notes on delete-fullstack-cloudflare. I want to build on Cloudflare Pages with Next.js so much, but there are just too many gotchas and issues I'm running into. I logged the following issues on the next-on-pages repo:

Next to explore: Next.js with SST? Or Astro Actions with the Cloudflare adapter?


  • A few problems porting an existing Next.js app to Cloudflare Pages. It works locally, but I'm getting a cryptic error along the lines of:

    Application error: a server-side exception has occurred (see the server logs for more information).Digest: 2354650810

    Cloudflare logs reveal the following error:

"Error: The 'credentials' field on 'RequestInitializerDict' is not implemented.

Lots of people are seeing this error pop up in various forms; one thread that seems related… https://github.com/cloudflare/workers-sdk/issues/2514


Thank you Netlify. You've been amazing over the years. But Cloudflare's pricing, reliability and general future outlook can't be denied. The migration of my projects to Cloudflare has begun. notes.chiubaca.com and chiubaca.com have been ported over seamlessly.


High-level steps for getting a mockup via their REST API:

Get all products: https://api.printful.com/products. E.g. we're interested in stickers:

"id": 505,

"main_category_id": 202,

"type": "STICKER",

"description": "These glossy stickers are designed just for you. Give your favorite items, like a notepad, laptop, or phone case an extra glow and personal touch. \n\n• Film type: MPI 3000 Gloss HOP (EU), Promotional Monomeric PVC (US)\n• 0.12″ (0.3 cm) white sticker border \n• Glossy finish\n• Fast and easy application\n• 2–3 year durability\n• Indoor use\n• Blank product sourced from Japan\n\nDon't forget to clean the surface before applying the stickers.",

"type_name": "Kiss Cut Sticker Sheet",

Use id 505 to make the next request: https://api.printful.com/products/505

This returns the full Printful stock, variants and other useful details about this product. Most importantly, keep note of the variant IDs, e.g. "id": 12917.

Use the product id again to get printfiles for this product: https://api.printful.com/mockup-generator/printfiles/505

E.g. response:

```json
{
  "code": 200,
  "result": {
    "product_id": 505,
    "available_placements": {
      "default": "Print file",
      "background": "Background"
    },
    "printfiles": [
      {
        "printfile_id": 256,
        "width": 1750,
        "height": 2482,
        "dpi": 300,
        "fill_mode": "fit",
        "can_rotate": false
      },
      {
        "printfile_id": 259,
        "width": 1822,
        "height": 2554,
        "dpi": 300,
        "fill_mode": "cover",
        "can_rotate": false
      }
    ],
    "variant_printfiles": [
      {
        "variant_id": 12917,
        "placements": {
          "default": 256,
          "background": 259
        }
      }
    ],
    "option_groups": [
      "Holiday season",
      "Lifestyle 2",
      "Lifestyle 3",
      "Lifestyle 4",
      "Spring/summer vibes",
      "Valentine's Day"
    ],
    "options": [
      "Christmas 2",
      "Spring/Summer 2",
      "Valentine's Day",
      "Valentine's Day 2"
    ]
  },
  "extra": []
}
```

We're now ready to form a mockup generator request: https://api.printful.com/mockup-generator/create-task/505

This is a POST request which needs a body with a payload like:

```json
{
  "variant_ids": [12917],
  "format": "jpg",
  "files": [
    {
      "placement": "default",
      "image_url": "https://clever-stork-292.convex.cloud/api/storage/9fdcfdeb-ee06-47a2-83ff-01184c939d0d",
      "position": {
        "area_width": 1800,
        "area_height": 1800,
        "width": 1800,
        "height": 1800,
        "top": 0,
        "left": 0
      }
    }
  ]
}
```

This will return a task that can be polled at an interval:

```json
{
  "code": 200,
  "result": {
    "task_key": "gt-645099290",
    "status": "pending"
  },
  "extra": []
}
```

Poll it at https://api.printful.com/mockup-generator/task?task_key={task_key}
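The polling itself can stay generic. A sketch of a helper (pollTask is my own name; the status values mirror the response above, and in real use `check` would wrap a fetch to the task endpoint):

```typescript
// Generic poller: keeps calling `check` until it reports a terminal status.
// In real use `check` would fetch /mockup-generator/task?task_key=... and
// return the parsed result object.
type TaskStatus = { status: "pending" | "completed" | "failed" };

async function pollTask<T extends TaskStatus>(
  check: () => Promise<T>,
  intervalMs = 1000,
  maxAttempts = 30
): Promise<T> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const task = await check();
    if (task.status !== "pending") return task;
    // wait before the next poll
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("mockup task timed out");
}
```

Capping attempts matters here, since a stuck task would otherwise poll forever.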


Figuring out how to automate placing an order:

POST to https://api.printful.com/orders; this works:

```json
{
  "external_id": "sticker-2-4235234213",
  "shipping": "STANDARD",
  "recipient": {
    "name": "John Smith",
    "company": "John Smith Inc",
    "address1": "19749 Dearborn St",
    "address2": "string",
    "city": "Chatsworth",
    "state_code": "CA",
    "state_name": "California",
    "country_code": "US",
    "country_name": "United States",
    "zip": "91311",
    "phone": "string",
    "email": "[email protected]",
    "tax_number": "123.456.789-10"
  },
  "items": [
    {
      "id": 1,
      "external_id": "sticker-item-2",
      "variant_id": 1,
      "quantity": 1,
      "price": "13.00",
      "retail_price": "13.00",
      "name": "Kiss Cut Sticker Sheet",
      "product": {
        "variant_id": 12917,
        "product_id": 505,
        "image": "https://printful-upload.s3-accelerate.amazonaws.com/tmp/71dbbab32afaf7a761f7a28b91ac9268/kiss-cut-sticker-sheet-white-front-6600501f5d2c0.png",
        "name": "Kiss Cut Sticker Sheet"
      },
      "files": [
        {
          "type": "default",
          "url": "https://printful-upload.s3-accelerate.amazonaws.com/tmp/71dbbab32afaf7a761f7a28b91ac9268/kiss-cut-sticker-sheet-white-front-6600501f5d2c0.png",
          "options": [
            {
              "id": "template_type",
              "value": "native"
            }
          ],
          "filename": "shirt1.png",
          "visible": true,
          "position": {
            "area_width": 18,
            "area_height": 2400,
            "width": 1800,
            "height": 1800,
            "top": 300,
            "left": 0,
            "limit_to_print_area": true
          }
        }
      ],
      "options": [
        {
          "id": "OptionKey",
          "value": "OptionValue"
        }
      ],
      "sku": null,
      "discontinued": true,
      "out_of_stock": true
    }
  ],
  "retail_costs": {
    "currency": "USD",
    "subtotal": "10.00",
    "discount": "0.00",
    "shipping": "5.00",
    "tax": "0.00"
  },
  "gift": {
    "subject": "To John",
    "message": "Have a nice day"
  },
  "packing_slip": {
    "email": "[email protected]",
    "phone": "+371 28888888",
    "message": "Message on packing slip",
    "logo_url": "http://www.your-domain.com/packing-logo.png",
    "store_name": "Your store name",
    "custom_order_id": "kkk2344lm"
  }
}
```


Been going deep into Stripe, Stripe.js and Stripe Elements. The killer feature of Stripe appears to be Elements, which lets you embed checkout elements anywhere on your website to build custom checkout flows. LemonSqueezy on the other hand only seems to have the Hosted and Overlay flows. Once they have a similar product to Elements, it could be a real Stripe killer.


AbortController is a web standard for aborting requests via a signal.

This is quite important in React 18 due to the double firing of useEffects in dev mode. We probably want to cancel redundant requests when components mount and re-mount.

```tsx
useEffect(() => {
  // Set up our AbortController
  const abortController = new AbortController();

  const getAsyncData = async () => {
    try {
      const res = await fetch(`/api/some-data?query=${queryParam}`, {
        method: 'POST',
        signal: abortController.signal,
      });
      const json = await res.json();
      const clientSecret = json.clientSecret;
      setStripeState({ state: 'success', clientSecret });
    } catch (e) {
      // an aborted fetch rejects too, so only surface real errors
      if (!abortController.signal.aborted) {
        setStripeState({ state: 'error', message: 'Unknown error' });
      }
    }
  };

  getAsyncData();

  // if the component unmounts or the queryParam changes we can cancel
  // the in-flight fetch request like this:
  return () => {
    abortController.abort();
  };
}, [queryParam]);
```


Excellent directory of Tailwind CSS animations: animation.ibelick - Tailwind CSS animation gallery


  • Progress with sticker-ai: payments linked to LemonSqueezy, Convex linked to OpenAI, UI all linked up to Convex, auth linked up to Clerk. This is the fastest I've ever built!


Using Convex to build sticker-ai and it's a lot of fun! Getting into the weeds of how Convex works is interesting. Its patterns and principles make it a delight to work with. Things I like about it so far:

  • all convex logic lives in /convex
    • logic is clearly separated into queries, mutations, actions
  • everything is strongly typed
  • the auto-generated api is awesome

things that are confusing:

  • Convex actions sort of make React Server Actions redundant
  • Are you even supposed to run Convex actions inside React Server Actions? The context object seems to get lost.
  • I guess working offline is out of the question?


  • Turn your Postgres into MongoDB: FerretDB

  • Replicate.com is such an amazing platform for using and training AI models. So exciting exploring and tinkering with fine-tunes and existing models. Questions I have though…

    • When to fine tune?
    • When to use the model as is?
    • How to adapt an existing model to produce consistent outputs?
  • OpenAI also has the ability to fine-tune…


Researching more into contracting life and what it entails.

Points from Guide to software engineering contracting in UK (codedeepdives.com)

  • easily demand anywhere between £400-£500 a day
  • work can range between 6 weeks to 6 months
  • the feeling of job security is subjective
  • set up a ltd company for IR35 purposes!
  • get an accountant!
  • getting a mortgage becomes more complex but there are specialist brokers out there for this
  • get a business bank account! e.g. Tide (www.tide.co) or Starling. Monzo also looks good.

Other things I don't understand yet:

  • best way to define work with an employer
  • handling timelines

Notes from GPT-4

Certainly! As a software engineer contractor, organizing your work effectively with clients is crucial for successful project delivery. Here are some essential tips to help you stay on track and maintain a positive client relationship:

  1. Clear Communication:

    • Establish open and transparent communication channels with your client. Regularly update them on progress, challenges, and any changes.
    • Use tools like project management software to track tasks, deadlines, and milestones. This ensures everyone is on the same page.
  2. Detailed Requirements Gathering:

    • Invest time in understanding the client’s requirements thoroughly. Ask questions, clarify doubts, and document everything.
    • Create a detailed scope document that outlines project goals, features, and deliverables.
  3. Agree on Milestones and Deadlines:

    • Break down the project into smaller milestones. Define clear deadlines for each milestone.
    • Regularly review progress and adjust timelines if needed.
  4. Contract and Legal Aspects:

    • Have a well-defined contract that covers terms, payment schedules, intellectual property rights, and confidentiality.
    • Include provisions for change requests and scope adjustments.
  5. Regular Work Reports:

  6. Risk Management:

    • Identify potential risks early on. Discuss them with the client and propose mitigation strategies.
    • Be prepared for unexpected challenges and have contingency plans in place.
  7. Quality Assurance and Testing:

    • Ensure thorough testing of your code. Deliver high-quality work that meets the client’s expectations.
    • Involve the client in user acceptance testing (UAT) to validate functionality.
  8. Documentation:

    • Maintain clear and concise documentation. This includes code comments, user manuals, and technical guides.
    • Document any design decisions, assumptions, and rationale.
  9. Feedback and Iteration:

    • Encourage client feedback throughout the process. Be open to making necessary adjustments.
    • Iterate based on feedback to improve the final product.
  10. Professionalism and Boundaries:

    • Set professional boundaries. Clearly define working hours, response times, and availability.
    • Maintain a positive and respectful attitude even during challenging situations.

Remember, every client is unique, so adapt these principles to suit the specific project and client dynamics. Building strong client relationships will lead to repeat business and referrals in the long run! 🌟


Paperwork templates: Free Contract Templates and Agreements (Word or PDF) | Signaturely


Looking into installing Stable Diffusion locally and the steps seem to have gotten easier. The two main options right now that offer a nice GUI over SD are:

Comfy UI looks more powerful but AUTOMATIC1111 seems to be an easier install. Trying the latter for now.

The Top 11 Open GeoParquet Datasets: Making big geospatial data easy - Matt Forrest - Modern GIS and Geospatial Ideas and Guides


Not used Netlify Functions in a while and they've improved a lot! Including first-class support for TypeScript.

I was able to build a proxy server to append additional headers to an image response very quickly. ArrayBuffers and Blobs still confuse me quite a bit in Node.js land.


  • Kind of a shame there is no auth-js example for Astro yet, but it looks like it's planned.

  • CSS challenge: would like to try building a scroll-to-top icon.
    • it should only appear when you're scrolling up
    • it should get bigger, but stop growing once it reaches a certain size
    • no JS


Learning more about Git Submodules. They're meant to link git repos together when they're dependencies of each other.

Adding a submodule to an existing git repo is as straightforward as this command:

git submodule add https://github.com/chaconinc/DbConnector

This seems perfect for chiubaca-monorepo, where I need to link it to my notes repo. The last piece of the puzzle is whether I can get a sync working between that repo and my Astro codebase. I have not been successful with symbolic links on Windows.

Astro.js has a built-in mechanism to manipulate markdown via remark/rehype plugins. These are functions that operate on the markdown AST, which makes it straightforward to build your own. I'm doing this in chiubaca-mono to convert my Obsidian image and markdown references so paths are corrected when published to my Astro websites.
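A stripped-down version of the idea: a plugin is essentially a function that walks the tree and rewrites nodes. Here is a sketch over a simplified mdast-like shape (real plugins get richer node types and would use unist-util-visit; the attachments/ to /images/ mapping is hypothetical):

```typescript
// Simplified mdast-style node - real remark plugins receive richer types
// from the `mdast` package.
type Node = {
  type: string;
  url?: string;
  children?: Node[];
};

// Rewrite Obsidian-style attachment paths (e.g. "attachments/foo.png")
// to wherever the published site serves them from (mapping is made up here).
function rewriteObsidianPaths(tree: Node): Node {
  if (tree.type === "image" && tree.url?.startsWith("attachments/")) {
    tree.url = tree.url.replace("attachments/", "/images/");
  }
  // recurse into child nodes
  tree.children?.forEach(rewriteObsidianPaths);
  return tree;
}
```

Wrapped as `() => (tree) => rewriteObsidianPaths(tree)`, this is the shape Astro expects in the remarkPlugins array of its markdown config.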


Happy new year!

Feeling optimistic that this is the year one of my side-projects can turn into a real side-hustle.


v2 of chiubaca.com and notes.chiubaca.com are live, all styled up! Still a few minor issues and features I'd like to get in place, but it's 90% there. I'm ready to write more in 2024!

  • some sort of commenting system
  • web mentions
  • rss feed
  • links working in the breadcrumb nav
  • improve code reuse


  • Very happy with the work I've done consolidating a bunch of code for my websites on *.chiubaca.com into a single Nx mono-repo at https://github.com/chiubaca/chiubaca-monorepo. My favourite aspect is that both notes.chiubaca.com and chiubaca.com are driven 100% by the notes in this Obsidian vault. This is made possible because both Obsidian and Astro.js treat YAML frontmatter as a first-class citizen, so I can do almost anything I want in terms of data manipulation as long as I have consistent frontmatter across all my notes. A big write-up of this is needed.

  • Configure Cloudflare DNS to Work with Netlify (stevepolito.design)


GitHub fine-grained personal access tokens play nicely with tokens (classic), which is nice. So no code refactoring is required if you're using https://github.com/octokit/octokit.js.

Invoke GitHub Actions manually with workflow_dispatch:

```yaml
name: Create Empty Daily Note

on:
  schedule:
    # run workflow every morning at 6AM
    - cron: "0 6 * * *"
  # run this workflow manually from the GitHub Actions tab
  workflow_dispatch:
```


  • Loading glTFs in Mapbox w/ three.js is pretty dang performant! It might not be necessary to use solutions like Cesium and Esri I3S. Need to improve my understanding of glTFs in the new year.


Let's get back to journaling! Starting a new job has naturally been crazy! Things I want to write about:

  • coming up with business ideas
  • separating my obsession with tools from solving real-life problems
  • my experience working at different sized companies


Been working with QGIS more and more at work and it continues to delight. The model designer is as powerful as ArcGIS ModelBuilder. Managed to automate a raster processing pipeline that converts pixels to GeoJSON points, all through the model designer, without needing to read any documentation.


  • It's been fun revisiting A-Frame after all these years. Funny how I was able to hack away at a mapping demo so early in my coding journey, and now I'm being paid to explore a real-life AR mapping use case.

  • AR.js Location Based has been very fun to work with. It's now a rabbit hole that will make me want to explore A-Frame and three.js deeper…



The upside of imposter syndrome

“In those moments where you feel like an imposter… you realise ‘I have something to prove’, so you’re not complacent,”

  • Imposters have perfectionist tendencies, harbouring a secret need to be the very best at what they do. When they are unable to fulfil their perfectionist goals, “imposters often feel overwhelmed, disappointed, and overgeneralise themselves as failures”.

The Unexpected Benefits of Doubting Your Own Competence


  • How to solve cors issue in geoserver || GeoServer || GeoDev - YouTube

    • Navigate to your GeoServer installation, e.g. C:\Program Files\GeoServer, and look for the web.xml file under \webapps\geoserver\WEB-INF

    • Uncomment the filter settings. There is a comment: <!-- Uncomment following filter to enable CORS in Jetty. Do not forget the second config block further down. -->

    • No need to touch the CORS settings regarding Apache Tomcat

    • There is one more block which has a comment: <!-- Uncomment following filter to enable CORS -->. Enable the code below it:

```xml
<filter-mapping>
    <filter-name>cross-origin</filter-name>
    <url-pattern>/*</url-pattern>
</filter-mapping>
```


Mistakes I won’t make again:

  • Ignoring bugs that I can’t reproduce consistently, they’re the most important to fix.

  • Being so burnt out that I fail to tell others I'm burnt out.

  • Not asking for support when I need it.

  • Not pushing for planning when I know we really need it

  • Pushing ahead with no real plan.


  • Working with Playwright a lot to crawl a website for data. It's very fiddly, but I'm making progress. The high-level workflow that seems to be working: use the Locators API to narrow down the block of elements I'm interested in, then use $() (get one element handle) or $$() (multiple handles) to parse the block and extract data from a DOM element.


  • Continuing to scout the current GIS landscape for trends
    • GeoParquet - a new format for storing geospatial data
    • Overture Maps - a redistribution of OSM that enhances geospatial data. A new initiative by Amazon, Meta, Microsoft and TomTom. It outputs to GeoParquet
    • DuckDB - a file-based, high-performance DB which can help process GeoParquet to CSV or any other format
    • Protomaps / PMTiles - a new way to host vector tiles efficiently


Agency life has been tough. No doubt about it. Reflecting on what's been good and what's been bad.

The good:

  • Forced to learn quickly and you will level up in a short amount of time
  • Learn new tech (at least in my company)
  • Lots of projects so it’s hard to get bored

The bad:

  • No time to learn tech deeply
  • Ship fast and test nothing
  • Deadline pressures vary from project to project. But when it's on, it's on!
  • Lots of projects, but you end up just implementing the same thing over and over and over and over again…


Great to be getting excited about geospatial things again. Starting to think about what a modern GIS stack looks like. When I last looked it was PostGIS, GeoServer, Leaflet / Mapbox. New things I want to explore: vector tiles / Protomaps, deck.gl.


  • Been playing with React InstantSearch Hooks and I'm very impressed. They've thought of everything. This small bit of code does almost anything you want a search UI to do:

```tsx
import algoliasearch from 'algoliasearch/lite';
import { history } from 'instantsearch.js/es/lib/routers';
import {
  InstantSearch,
  SearchBox,
  Hits,
  Highlight,
  RefinementList,
  Pagination,
} from 'react-instantsearch-hooks-web';

const routing = {
  router: history(),
};

const searchClient = algoliasearch(/* appId, searchApiKey */);

function Hit({ hit }) {
  return (
    <article>
      <Highlight attribute="post_title" hit={hit} />
    </article>
  );
}

const Page = () => {
  return (
    <InstantSearch searchClient={searchClient} indexName="..." routing={routing}>
      <SearchBox classNames={{ root: 'h-3' }} />
      <RefinementList attribute="categories" />
      <Hits hitComponent={Hit} />
      <Pagination />
      <Login /> {/* custom component */}
    </InstantSearch>
  );
};
```

This gives a search box which populates the UI with results. We can filter by facets and there is text highlighting. To override classes we can use Tailwind with the classNames prop they provide; each class can be overridden.


Reading React Essentials for Next.js 13.

  • Server components can't be imported into client components. However, if a client component exposes a "hole" for a React node, e.g. with children or a custom prop, a server component can be passed through there. This means passing something like <ChildServerComponent /> as the children of a client component is possible.
  • If a server environment key which is not prefixed is leaked to the client, Next.js will convert it to an empty string.
    • To prevent this there is the server-only package, which can be installed with npm install server-only and imported like so:

```ts
import 'server-only';
```
  • Context will work, but you will probably need to wrap all third-party providers in a use client component, e.g.:

```tsx
'use client';

import { ThemeProvider } from 'acme-theme';
import { AuthProvider } from 'acme-auth';

export function Providers({ children }) {
  return (
    <ThemeProvider>
      <AuthProvider>{children}</AuthProvider>
    </ThemeProvider>
  );
}
```

Next js Framework Environment Variables


  • Notes from the React RFC for first-class support for promises

    • React client components can't be asynchronous. In React 18, if you need to run some async code you can now make use of the use hook. This hook is special; for one thing, it can be used conditionally, which breaks one of the usual rules of hooks.

```jsx
function Note({id, shouldIncludeAuthor}) {
  const note = use(fetchNote(id));

  let byline = null;
  if (shouldIncludeAuthor) {
    const author = use(fetchNoteAuthor(note.authorId));
    byline = <h2>{author.displayName}</h2>;
  }

  return (
    <div>
      {byline}
      <p>{note.content}</p>
    </div>
  );
}
```
Reminder of how to handle double fetching in client-side React when using useEffect:

```jsx
useEffect(() => {
  let ignore = false;

  async function startFetching() {
    const json = await fetchTodos(userId);
    // only commit state if this effect hasn't been cleaned up
    if (!ignore) {
      setTodos(json);
    }
  }

  startFetching();

  return () => {
    ignore = true;
  };
}, [userId]);
```


  • Arc Browser, a.k.a. The Browser Company, are one to keep an eye on. They're making cross-platform app development possible with Swift. Swift is a language I feel like I've been sleeping on. It looks super ergonomic to work with and many great iOS apps have been developed with it. It promises a UI development DX as good as React. https://www.youtube.com/watch?v=Xa_fNuaSE_I


Rewriting jweidesigns.com with the Next 13 app dir today. Some thoughts:

  • typing URL params still sucks, there has got to be a better way:

```tsx
type BlogSlugPageProps = {
  params: { slug: string };
};

export default function BlogSlugPage({ params }: BlogSlugPageProps) {
  const slug = params.slug;

  return <>Slug page | {slug}</>;
}
```
  • You can't "opt out" of global layouts. Instead it's important to think about layouts in terms of layout groups. But how do you wrap the index page into a group? 🤔

Turns out this is the more informative doc that explains how route groups work: https://nextjs.org/docs/app/building-your-application/routing/route-groups

  • It works by wrapping sections of your UI code into folders named with parentheses.
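E.g. a structure like this (folder names are my own invention). Group folders are stripped from the URL, which is how the index page can live inside a group:

```
app/
├── (marketing)/
│   ├── layout.tsx       <- layout applied only to this group
│   └── page.tsx         <- still serves "/"
└── (shop)/
    ├── layout.tsx
    └── cart/
        └── page.tsx     <- serves "/cart"
```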


Not kept up my notes for soo long because of work projects! 😭

  • I have been working with Framer Motion. The big revelation I've had with Framer is thinking about everything as variants. When we key our animation objects into a variants object like so…
```jsx
const container = {
  hidden: { opacity: 1, scale: 0 },
  visible: {
    opacity: 1,
    scale: 1,
    transition: {
      delayChildren: 0.3,
      staggerChildren: 0.2,
    },
  },
};
```

… the magic behind Framer Motion is that it knows how to tween between the two animation states when you provide the keys of the variants object to Framer component props, e.g. animate.

  • Things get really interesting with Framer when you pass motion values into motion components. A motion element is a component that looks like this:

```jsx
import { motion } from 'framer-motion';

<motion.div>I behave like a regular div</motion.div>;
```

The style prop has superpowers now and can accept "motion values":

```jsx
function Component() {
  const x = useMotionValue(0);

  useMotionValueEvent(x, "animationStart", () => {
    console.log("animation started on x");
  });

  useMotionValueEvent(x, "change", (latest) => {
    console.log("x changed to", latest);
  });

  return <motion.div style={{ x }} />;
}
```


In this example, the x value can be passed into style and the component can magically animate the x value without tonnes of re-renders. Framer does its animations outside of the React re-rendering process.


  • Render children when using forwardRef in React:

```tsx
import * as React from 'react';

type ButtonProps = React.HTMLProps<HTMLButtonElement>;

const FancyButton = React.forwardRef<HTMLButtonElement, ButtonProps>((props, ref) => (
  <button type="button" ref={ref} className="FancyButton">
    {props.children}
  </button>
));

// You can now get a ref directly to the DOM button:
const ref = React.createRef<HTMLButtonElement>();

<FancyButton ref={ref}>Click me!</FancyButton>;
```


  • git-sim is a tool to visualise your git actions. Installing on a Mac M1 is awkward though:

    - brew install py3cairo ffmpeg
    - pip3 install manim
    - pip3 install git-sim
  • then we can run something like git-sim merge main


  • Remix pushes you to use <form /> to make calls to the server. We're probably more used to POSTing with something like fetch or axios, but Remix teaches you vanilla form actions. When paired with an action function to process the request server-side, it makes for a really tidy DX. Interestingly, form actions work with the browser <form> element, which will cause a full page reload. If you need a more SPA feel, they also provide a Remix <Form> component.

random tinkerings:


  • The user facing software spectrum

  • good things about Remix

    • simple data flow:
  • Load the data right in the same component. (next.js 13 can do this too now…)

  • Mutation with form actions handles server-side write functionality cleanly.

  • confusing things about Next.js:

    • The flow of data in the app is confusing.
      • Map this order to the file structure and you can see how confusing it is.


When a div is absolutely positioned and you want it to fill its parent, you can use inset: 0, which is the same as:

  top: 0;
  right: 0;
  bottom: 0;
  left: 0;

CSS radial gradients can get complicated. At its most basic, it's a function that can take as many colour stops as you want and it will distribute them evenly in a circular fashion.

  background: radial-gradient(red, green, blue);

We can add a percentage that defines each colour’s stop points. CSS blends the colours for us.

  background: radial-gradient(red 10%, green 20%, blue 80%);

ok, so there’s also a secret optional preliminary argument too… MDN documents the radial-gradient function like so:

radial-gradient( [ <ending-shape> || <size> ]? [ at <position> ]? , <color-stop-list> )  

<ending-shape> - can either be circle or ellipse. An ellipse is basically just a circle stretched to match the aspect ratio of the element it’s in.

<size> - has four options documented here

  • closest-side
  • closest-corner
  • farthest-side
  • farthest-corner - default

<position> - defaults to center but supports an x and y . This is the param that we can make dynamic via CSS variable to achieve interesting effects like a spotlight effect

so our simple CSS radial gradient could now look like this:

  background: radial-gradient(farthest-side circle at var(--x, 0%) var(--y, 10%), red 10%, green 20%, blue 80%);


  • ChatGPT reminding me how to do SVG animations:

rotate a svg

To animate a group of SVG elements and make them spin indefinitely, you can use the <g> element to group them together, and then use the <animateTransform> element within the <g> element to animate the group as a whole. Here is an example that shows how to create an infinitely spinning group of SVG elements:

<svg>
  <g id="group">
    <rect x="10" y="10" width="100" height="100" />
    <circle cx="50" cy="50" r="40" />
  </g>
  <animateTransform xlink:href="#group"
                    attributeName="transform"
                    type="rotate"
                    from="0"
                    to="360"
                    dur="3s"
                    repeatCount="indefinite" />
</svg>

In this code, the <g> element is used to group the <rect> and <circle> elements together, and the <animateTransform> element is used to animate the group by rotating it indefinitely. The xlink:href attribute of the <animateTransform> element specifies the target of the animation (in this case, the <g> element with the id of "group"). The other animation attributes, such as attributeName, type, from, to, dur, and repeatCount, are used to control the animation in the same way as described in the previous answer.

rotate from the center of the svg

To rotate an SVG image from its center, you can use the transform-origin attribute and specify the center point of the rotation. Here is an example that shows how to rotate an SVG image from its center:

  <image xlink:href="image.svg"
         transform="rotate(45)"
         transform-origin="50% 50%" />

In this code, the transform-origin attribute is set to "50% 50%", which specifies the center of the rotation (i.e. the center of the image). The transform attribute is used to specify the rotation angle (in this case, 45 degrees).

Alternatively, you can use the <g> element and specify the rotation transformation and the transform-origin attribute on it, like this:

  <g transform="rotate(45)"
     transform-origin="50% 50%">
    <image xlink:href="image.svg" />
  </g>

In this case, all the elements within the <g> element will be rotated by 45 degrees around the center of the group.


  • Carousels have so many types of UX
    • Draggable
    • Draggable with snapping
    • Free slide
    • Paginated
    • Slidable with scroll wheel

The most basic carousel is probably just CSS overflow with scroll snapping. But the moment you want “drag” functionality, this approach doesn’t work and we need to transform a div by “sliding” it along the x-axis. Once we do this we need to do all the hard work of figuring out the left-x position of every slide. These become our slide points, which can be used to write our own snapping and pagination.
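A rough sketch of that idea, assuming we already know each slide's width (helper names are made up):

```javascript
// Compute the left-x offset of each slide in the track.
function getSlidePoints(slideWidths) {
  const points = [];
  let x = 0;
  for (const width of slideWidths) {
    points.push(x);
    x += width;
  }
  return points;
}

// Snap a drag-end position to the nearest slide point.
function snapToNearest(points, dragX) {
  let closest = points[0];
  for (const point of points) {
    if (Math.abs(point - dragX) < Math.abs(closest - dragX)) closest = point;
  }
  return closest;
}
```

e.g. three 300px slides give points [0, 300, 600], and releasing a drag at x = 420 would snap back to 300.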


  • ChatGPT is wild, asked for function to find closest number given an array of numbers and got :

To find the closest number to a given number in an array of numbers, you can use a combination of the Math.abs() and Math.min() methods.

The Math.abs() method returns the absolute value of a number, which is the number without its sign (positive or negative). This is useful for comparing the distance between two numbers, because the distance between two numbers is always positive.

The Math.min() method returns the smallest number in a given array of numbers. This is useful for finding the smallest absolute difference between a given number and the numbers in an array.

Here is an example of a function that uses these methods to find the closest number to a given number in an array:


function findClosestNumber(numbers, target) {
	const distances = numbers.map(number => Math.abs(number - target));
	const closestIndex = distances.indexOf(Math.min(...distances));
	return numbers[closestIndex];
}


const numbers = [1, 5, 10, 15, 20];
const target = 12;

findClosestNumber(numbers, target)
// nearest will be 10


  • To allow for full-bleed elements in a CSS grid, we can’t use max-width on inner elements because then we can’t “break out”. Instead, we can define the outer gutters as auto-growing columns. But what if we want to constrain the inner container to a specific size?
.layout {
  display: grid;
  grid-template-columns:
    calc( 50vw - 500px )
    repeat( 8, 1fr )
    calc( 50vw - 500px );
}

This trick gives us an inner grid of 8 columns constrained to 1000px, as each outer gutter is half the viewport minus half the max width. The maths checks out: 100vw − 2 × (50vw − 500px) = 1000px.



  • track the previous state in a React 18 friendly way. No useEffect required.

const [count, setCount] = React.useState(1);
const [tuple, setTuple] = React.useState([null, count]);

if (tuple[1] !== count) {
	setTuple([tuple[1], count]);
}

// tuple[0] now holds the previous count

set default nvm version: nvm alias default 6.11.5 if you want it pegged to that specific version.

You can also do nvm alias default 16 or nvm alias default node.

Either way, you’ll want to upgrade to the latest version of nvm (v0.39.2 as of this writing)

# nvm set default node.js version 16.14.2
$ nvm alias default 16.14.2
$ nvm use

$ node -v
# v16.14.2


  • Started work on a new project to sync a video with a GeoJSON route. Its purpose is to pair a running video with a moving marker on a map so you can see where you are in the video relative to your geo location.


  • Messing around with a weird idea in SvelteKit
    • Render all Hacker news link URLs
    • Render article URL in an Iframe
    • Invert the colours of the page with CSS to get a dark mode on articles that don’t support it! In conclusion, it works! But I learnt that a website can deny being rendered inside another site with X-Frame-Options: DENY
    • https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/X-Frame-Options

There are interesting hacks to bypass the X-Frame-Options settings, but it's probs not a good idea. The better solution would be to pre-render the website server side and proxy the content on your own domain. I assume this is how something like Pocket works.


  • localhost on a docker container is scoped to its own internal network. So if you want localhost to refer to the “HOST”, i.e. the machine running docker, on mac we can use host.docker.internal

  • ok, so after battling with docker and fighting with networking issues on fly.io, a quick test with Railway has blown my mind. It lives up to the “it just works” motto: I can just deploy the same src code and run the same commands I run locally. Railway seems like the clear winner for hosting a quick node server easily and quickly!


  • Breakthrough with deploying the medusa server to fly.io. Just need to figure out why the managed fly.io redis service can’t communicate with its own internal app service:
2022-11-05T18:35:07.117 app[c84f0e2a] lhr [info] [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND fly-medusa-redis.upstash.io

Following chats indicate that it’s because the node app service needs its redis client configured to accept an IPv6 connection.


  • docker-compose is a docker script to run multiple docker commands in one go. This is configured in a docker-compose.yml file. A basic one looks like this:
# version of docker compose
version: '3'

# We can set up multiple services in one go
services:
  # provide the name of the service, can be anything
  app:
    # everything below here maps to the same docker cli commands
    build: .
    ports:
      - "3000:3000"
    # docker compose lets you use relative paths for volumes
    volumes:
      - ./:/app
      # this is a trick so node_modules doesn't get overridden
      - /app/node_modules
    # we can provide envs either explicitly or with an env file
    environment:
      - PORT=3000
    # env_file:
    #   - fileName

execute by running docker-compose up. To tear down, docker-compose down; additionally provide the -v flag to remove related volumes.

docker-compose also builds the image and will cache this step if it can.

note: if you update your Dockerfile, docker-compose is not smart enough to know to rebuild the image. use the --build flag to force a rebuild of the image

  • it’s possible to have multiple docker-compose.yml files, perhaps a docker setup for dev and another for production.

    • setup docker-compose.dev.yml and docker-compose.yml - https://www.youtube.com/watch?v=9zUHg7xjIqQ&t=4896s

      • then when it comes to running these files we can run docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d, which merges the docker-compose files together. The later file overrides the former.
  • you can have if else statements in your Dockerfile

  • ARG are build time environment variables
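A tiny sketch of a build-time ARG (the variable name is made up); unlike ENV, it only exists while the image builds:

```dockerfile
# APP_VERSION is only available during the build, not in the running container
ARG APP_VERSION=dev
RUN echo "building version $APP_VERSION"
```

Override it at build time with something like docker build --build-arg APP_VERSION=1.2.3 .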



  • If we want to use docker for local dev, changes don’t get synced magically; a new image needs to be rebuilt and run every time. A workaround for this is to use a bind mount volume. The full cmd would look something like:
docker run -v <full-path-from-host>:<docker-path> -p 1234:3000 -d --name node-app node-app-image
  • if we want the bind mount to be read-only we can use ro

-v <full-path-from-host>:<docker-path>:ro. This means the container can’t create or change files on the host machine.

  • To use environment variables set by a user we can use the $ syntax in our Dockerfile
This means we can use the following cli flag when running the container: `docker run --env PORT=<PORT-number>`



A basic docker script for a node.js app:

# Mandatory, every dockerfile must be based on another image
FROM node:16

# This sets the current working dir, all further actions will be made relative to this path
WORKDIR /app

# We copy the package.json and install deps first for performance reasons.
# This is the longest step, so dependencies are cached for subsequent builds
COPY package.json .
RUN npm install

# We can now copy the rest of the source code across, we don't need to re-run npm install as dependencies are installed already
# '. .' looks kinda confusing. This just means copy everything from the dev working dir to the docker working dir
COPY . .

# This doesn't actually do anything and acts more as documentation that port 3000 is being exposed and can be port forwarded for external access
EXPOSE 3000

# run the command that will start the app
CMD ["node", "index.js"]
  • docker caches these steps so a second run will generally be faster

  • to build an image run

docker build . -t node-app-image

. is the path of where the Dockerfile is, -t is the name of the image

to run a built image

docker run -p 3000:3000 -d --name node-app node-app-image
  • -p 3000:3000 is how we forward ports from the host machine to the correct exposed port on the docker container.
    • port on left === port being accessed on host machine
    • port on right === port to direct to on docker container



  • Finally getting a chance to tinker with fly.io!

    • launch a Postgres DB with flyctl postgres create
  • If you want to connect to a fly.io postgres with PgAdmin you need to proxy the internal url to localhost on a specific port with the command:

fly proxy 6543:5432 -a app-name
# Proxying local port 6543 to remote [app-name.internal]:5432

this exposes internal fly.io db to localhost:6543



  • Was at Next.js conf in London and saw the release of next 13
    • Turbopack, the long-awaited update to webpack
    • next js layouts and the new router
    • react server components for simplified data fetching. No more getStaticProps/getServerSideProps; just fetch and figure out if you want to cache it.


  • Slowly but surely getting my head around how custom medusa plugins work. A few hurdles to get past:
    • Classes in TS are still confusing to me, especially when dependencies are automagically injected into classes for you

    • Learnt an Interesting pattern of taking injected dependencies and binding them to your local class.

    • when creating methods in Medusa custom services, these are exposed to your API, loaders and subscriptions, which means your services become a giant class full of business logic. There’s lots of magic around dependency injection, and a new term, dependency container, as under the hood Medusa makes heavy use of Awilix.

    • hit a pretty bad bug where I can’t create products in the medusa database!? - https://github.com/medusajs/medusa/issues/2034
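The injected-dependency binding pattern mentioned above, roughly sketched (class and service names are made up; Awilix-style containers pass the registered dependencies into the constructor as a single object):

```javascript
// A custom-service-style class: the container injects dependencies as one
// object, which we destructure and bind onto the instance for later methods.
class SyncService {
  constructor({ productService }) {
    this.productService_ = productService;
  }

  listTitles() {
    return this.productService_.list().map((product) => product.title);
  }
}
```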


  • tinkering around with how medusa plugins work. They effectively let you isolate logic which has a specific purpose whilst still giving you the full hooks and features of the medusa server, the core building blocks we can work with are:

    • the api: custom URL endpoints which can run any arbitrary code when hit
    • the service, the most important aspect: a class in which you can implement custom business logic as methods. Medusa exposes these methods to all aspects of your plugin including the api and loaders
    • finally, the loader, which from what I understand is functionality you can run at a certain lifecycle point, such as the startup of the plugin. You can even time it to run via a cron job. - https://docs.medusajs.com/advanced/backend/cron-jobs/create/#2-create-cron-job
  • plugins are a faff to work with right now, so I’m working around it by developing against a medusa server. The process of extracting this logic out of the medusa server and into a plugin looks straightforward though, as extending a medusa server and a plugin have the same directory structure. Keeping an eye on the following github issues:


How I think this Printful - Medusa integration is going to work


  1. Sync Printful store with Medusa

    • This includes all products and variants
      • Shipping cost
  2. Continuously keeping Medusa in sync with Printful may require setting up webhooks on Printful for when items are deleted, updated etc


  1. Render all products - https://docs.medusajs.com/references/js-client/classes/ProductsResource

  2. Use Medusa.js for carts/checkout - https://docs.medusajs.com/references/js-client/classes/CartsResource a. I would need to wire up the printful tax calculator around here too - https://developers.printful.com/docs/#tag/Tax-Rate-API

Regarding the payment and fulfilment with Printful, I’m unsure of the correct flow. In my head this is how I think it should work:

  1. Confirm payment is received from the customer in Medusa.
  2. on the order.payment_captured event, I could fire off a Printful order via their API with details captured in Medusa- https://developers.printful.com/docs/#tag/Orders-API
  3. ???
  4. Profit ?!

  • tunnel localhost to the web with ngrok


Testing out Medusa for shiba swag store

  • There are 3 components to a Medusa store:

    • The headless backend is a node server.
    • The Admin UI, a gatsby application that creates a UI for the headless backend. Live demo admin UI here.
    • The Storefront. This can be any UI framework you want and Medusa has a comprehensive JS API to communicate with the backend. There is a Next.js starter.
  • create-medusa-app is a handy tool to scaffold up all three components quickly.

  • Looks like there is a Printful integration already - https://github.com/olivermrbl/medusa-printful. EDIT, doesn’t work…

  • Looking into creating my own medusa plugin to integrate with the medusa platform, this could be a powerful workflow…

  • Conclusions after research today: Medusa provides utilities for all aspects of an eCommerce store. The bits I’m most interested in are:

    • The admin store for organising orders and payments.
    • Unified SDK API for stripe and PayPal
    • It’s just a node API server which is extensible. Includes a bunch of bells and whistles like cron jobs and batch jobs
  • The main challenges come around syncing back and forth between Printful and Medusa.

    • what happens when 1 item changes? do all products in the Medusa DB need to be wiped and recreated, or is there a tidy way to update incrementally?
    • webhooks to update things? Need to understand how the Medusa events service API works in more detail.
    • payments are just generally very confusing.


  • Thinking about making OS-like experiences, I’ve gone from Astro to Svelte and back to Astro again. I think I’ve cracked the “installation” mechanism. It lets me “install” an app using any front-end framework, all loaded lazily thanks to how Astro works. This opens up the ability to build my digital garden of random hacks, restricted only by Astro adapters.


Stable diffusion experiments today!

  • first off playing around with Diffusion Bee - Stable Diffusion GUI App for M1 Mac . A one-click install solution for stable diffusion.

    • it runs slow! 5 - 10 secs for 384 X 384 images. 768 x 768 takes 5 - 10 mins
    • The GUI is also quite limited, no image to image yet
    • Pretty nice install experience though!
  • Hosted Stable Diffusion - https://beta.dreamstudio.ai/dream

  • Time to explore if my Steam deck can run a local version of stable diffusion with faster speeds than my M1 macbook…

    • ok, first run crashed my steam deck because it was running in CPU mode.
    • interesting video here but it needs a compatible AMD GPU, which I’m not sure the Steam deck has…


  • Finally got SAML Jackson and NextAuth working! The main revelation I had was that the IdP and the next.js application never communicate with each other. The SP, i.e. SAML Jackson, is the middleware between both parties, and thus configuration always needs to point at the SP only.


  • The Changelog – Episode #506: Stable Diffusion breaks the internet

    • There are now 3 main text-to-image products: DALL-E 2, Midjourney and, most recently, Stable Diffusion
    • Stable diffusion is completely open source which means it’s open to hacking!
    • Is text-to-image harmful to artists?
    • The tech is out of the box now and can’t be undone. How do we move forward so that artists are not left in the dust?
    • Is this similar to Github copilot and devs?


Create a Next.js App in the Current Directory:

npx create-next-app .

let’s try to learn Docker again! https://www.youtube.com/watch?v=zJ6WbK9zFpI

Docker Intro🔗

  • containers vs VMs

    • containers sit on top of Docker, which sits on top of any OS
    • VMs sit on a Hypervisor and need to house an entire OS, making them more CPU and disk space intensive and larger to transfer
  • image vs containers

    • Docker images are packages or templates which can create one or more docker containers.
    • Containers are running instances of images; they have their own cpu and resources
  • Dockerfiles are used to create docker images

(stopped at https://www.youtube.com/watch?v=zJ6WbK9zFpI)


  • Trying to wrap my head around BoxyHQ SAML Jackson. Things I don’t understand right now:
    • when in NextAuth, what are the clientId and clientSecret?
    • How does SAML link back to Azure AD?
    • is https://<SAML-JACKSON-URL>/api/oauth/saml the SAML ACS?


  • Working with zustand a whole bunch and trying to figure out the best pattern for it. I’ve been using it like a svelte store replacement, but it seems cleaner to tightly couple it to react hooks, making hooks that persist state across different instances to create a singleton-like pattern.


  • Learning about Oauth for work watching this - https://www.youtube.com/watch?v=CPbvxxslDTU

    • OAuth never passes passwords around; instead, users are authenticated with OpenID Connect
    • OAuth is an authorisation framework and provides a mechanism to authenticate via a central authorisation server
  • Full workflow will look like the following:

  • What is “auth”

    • Auth is authentication( who are you?) and authorisation (can you?).

    • Auth always happens server side (on a per request basis). never client side.

    • Your client (e.g. a React app) can only understand “who” the user is via a server request, where the server returns a cookie/token back to the client.

    • cookies are usually best as we can restrict javascript access to them

    • All subsequent requests to the server will contain this cookie, and the server can validate whether the client can or cannot access a resource.

  • Authorisation workflows usually need to be coupled with a db. Your User schema in the database could have a Role field with either USER or ADMIN. Subsequent access to resources could simply be gated by checking this value first.
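That role check can be sketched as a tiny middleware-style guard (Express-flavoured, names made up; assumes req.user was already populated from the session cookie upstream):

```javascript
// Gate a resource on the user's Role field: anything but the required
// role gets a 403 before the handler runs.
function requireRole(role) {
  return (req, res, next) => {
    if (!req.user || req.user.role !== role) {
      return res.status(403).end();
    }
    next();
  };
}
```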


  • react-map-gl is the way to go when working with mapbox in react. It handles the initialisation of mapbox for you and lets you compose mapbox elements like attribution and point layers in an idiomatic react way.

  • I can’t get useMap to work correctly. I thought it was used to target a map instance, but it always returns null? Instead I’ve resorted to binding a ref to the Map component, which seems to work…

export const MapExample: React.FC<HomePageMapProps> = ({ clientLocation }) => {
	// useMap() kept returning null for me, so bind a ref to <Map> instead
	const mapRef = useRef<MapRef>(null);

	useEffect(() => {
		if (mapRef.current && clientLocation) {
			mapRef.current.flyTo({
				center: [clientLocation.long, clientLocation.lat],
				speed: 0.8,
				zoom: 10,
			});
		}
	}, [clientLocation]);

	return (
		<Map ref={mapRef} initialViewState={{ zoom: 1 }}>
			{/* e.g. drop a marker at the client location */}
			{clientLocation && (
				<Marker longitude={clientLocation.long} latitude={clientLocation.lat} />
			)}
		</Map>
	);
};


navigator.permissions.query({ name: 'geolocation' }).then((result) => {
  if (result.state === 'granted') {
    // we already have access
  } else if (result.state === 'prompt') {
    // calling getCurrentPosition will trigger the permission prompt
  } else if (result.state === 'denied') {
    // fallback behaviour
    // Don't do anything if the permission was denied.
  }
});

navigator.geolocation.getCurrentPosition(success, error, options)


  • revisiting zustand. It’s a replacement for context and useState, and it makes sharing state across components really seamless.


  • Trying out the supabase cli and attempting to create a workflow to sync local dev with the hosted environment, and having a pretty terrible time:
    • how is branching supposed to work?
    • my dev and remote migrations are out of sync, but I have no idea how to fix them?
    • do I dev on local and push to remote? or work on remote and sync with local?
    • supabase db remote changes takes forever!?

  • settled on a much simpler solution to generate types, and finally had a chance to use zx to write bash with javascript. It’s so good!

settling on this

import "zx/globals";
require("dotenv").config({ path: ".env.local" });

const { SB_PROJECT_URL, SB_ANON_KEY } = process.env;

await $`npx openapi-typescript ${SB_PROJECT_URL}/rest/v1/?apikey=${SB_ANON_KEY} --output types/supabase.ts`;

When in tRPC land, handle errors with trpc.TRPCError and don’t return an error object like in GQL


  • So much trouble controlling double re-renders in react! Do you just ignore it? Or try to nip it in the bud before it becomes too much of an issue later?!


  • Even though the Svelte API is amazing and way easier to integrate with third party libs like mapbox, I came to the conclusion I wasn’t ready to use SvelteKit yet, just due to the fact there were a few things I wasn’t clued up on. e.g. I don’t understand


  • Looking at svelte kit again
    • +page.js is a new convention that indicates there is some additional server logic to run before the page is rendered. Is this the equivalent of getServerSideProps?

    • Nested layouts

      • note: +layout.js means each layout can also have its own server side logic
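A sketch of what a +page.js load function might look like (route and field names are made up; in the real file this function would be exported):

```javascript
// SvelteKit passes a context object into load, including its own fetch wrapper.
// Whatever we return becomes the data available to the +page.svelte component.
async function load({ fetch }) {
  const res = await fetch("/api/posts");
  return { posts: await res.json() };
}
```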


  • lost hours to React useEffect firing twice in development
    • when trying to use Supabase’s realtime functionality, the double firing causes the subscription to close. Making react handle unmount and remount in quick succession is really hard! Ended up using a really sketchy hack


  • supabase cli is looking nice!
    • support for migrations
    • db branches with syntax similar to git

supabase db branch new my_branch
supabase db branch switch my_branch
  • Type generation

#app-idea obsidian as a admin blog editor for a static blog site

#blog-idea lessons for working at digital web agency

  • dealing with imposter syndrome
  • going slow to work fast later
  • managing pressure of deadlines
  • mentoring junior devs
  • working with lots of different tech all at the same time
  • learning quickly
  • not having time to work on side projects anymore
  • Working with really smart people
  • Working with really talented designers


  • Tech decision regrets I’ve made recently that are getting me down:
    • desktop first design
    • building without a typography design system
    • building without a data architecture
    • building without understanding full requirements


  • Day 3 of using Arc. Tinkering around with Boosts: a way to write javascript for a website that updates its look, feel and behaviour. It’s kinda like writing extensions for yourself without all the hassle. Love that the default example is how to replace web3 with a butt!
    • I really miss profiles. Seems like an important feature to not have


  • Day 2 of using Arc Browser
    • tab split screen is quite a game changer
    • as it’s chromium based, all chrome extensions still work without any problems
    • Extensions to update css on a website are kinda cool
    • Notes are a thing?! I already use obsidian for notes so not sure if this will fit into my note taking workflow
    • Feature idea: it would be good if tabs could be grouped in a tree view, e.g. all my github tabs automatically grouped together
    • the mini media controls when you’re playing music on a different website are such a nice touch!


  • I got access to Arc browser today!!
    • Day 1 impressions: the intro is an experience! Something akin to an Apple unveiling. Hypes you up to do something we do every day: browse the internet
    • Side bar is neat. Like the two finger swipe to go between spaces
    • hiding the sidebar and having a completely clean browsing experience is refreshing!?
    • Control T for a built in browser omni bar is such a great UX. I think Omni was way ahead of the curve on this one though.
    • Easels are gonna eat Excalidraw’s lunch?! (maybe not, Excalidraw is much better!)


  • Managed to chip away at the shiba swag store a little last night. Back on the pomodoros for little bursts of productivity. Seems to work great for side projects after work when I’m mentally drained.

  • Thinking of creating a chrome extension that puts you into a focus mode with a pomodoro timer and lofi girl running in the background. For the duration of the timer it blocks twitter and other social media.


Reading through the next.js layouts RFC pt 1. My current mental model:

  • There will be a new /app directory which can be used to incrementally port over pages from /pages.
  • at the root of the app we can define a layout.tsx file which will nest all components. This replaces _app.tsx and _document.tsx. getStaticProps and getServerSideProps work here, which opens up the ability to fetch global data across your website without needing to run the data request on every page.
  • Folder based routing still exists, but index.tsx will now need to be named page.tsx. This means we can colocate test files and other things without needing the page extensions API. You can also provide a layout.tsx in each nested route which will be nested in its parent’s layout.
  • There appears to be no API to override the layout right now?
  • all components will be react server components by default. This opens up the ability to run getServerSideProps and getStaticProps on components!? (need to verify this)
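A sketch of how I picture that file structure (route names are hypothetical):

```
app/
  layout.tsx      # root layout, replaces _app.tsx / _document.tsx
  page.tsx        # was pages/index.tsx
  blog/
    layout.tsx    # nested layout, wrapped by the root layout
    page.tsx      # was pages/blog/index.tsx
```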


  • Spicy take on why not to unit test
    • instead optimise CI to forward fix quickly
    • we should build out applications to fail gracefully so we can identify them quickly
    • unit tests slow down development
    • unit tests prevent shipping functionality quickly
    • instead we should focus on building paths rather than blocking them
    • 100% code coverage is probably stupid


  • generate an array of items:

const items = Array.from({ length: 8 }, (_, i) => (i % 2 === 0 ? 'a' : 'b'));
// (8) ['a', 'b', 'a', 'b', 'a', 'b', 'a', 'b']
  • this trick works with jsx too. Useful for mocking components in storybook:
const mocks = Array.from({ length: 100 }, (_, i) => <MockCard key={i} />); // MockCard being whatever component you're mocking


  • I really want to see if Remix + fly.io can do everything I want with shiba swag store.

  • Re-reading this article about aspect ratio

    • Aspect ratio used along with object-fit is probably what you want for content like blogs
    • dimension hints need to be correct otherwise you’ll get layout shift. Use them when confident of dimensions; they’re probably best for design-type elements.

  • I want to get good at yoga and golf.


  • imposter syndrome kicked in hard again today. I’ve learnt:
    • just because I can make something work doesn’t mean it’s implemented right
    • I’m going through multiple Dunning-Kruger effects
    • Developers will also re-write everything given the opportunity


  • Round 2 of playing with remix , thoughts so far:
    • the loader function to fetch data for the client is really intuitive. I like how, if you want, you can put all that logic into a model dir. It’s very flexible
    • Mutation of data via forms is also a very nice API. Dispatching an action to the server via a Form feels like black magic but works really well in practice. Having the data instantly available in another function, which is automagically the backend, streamlines things massively.
    • the line between front and backend feels blurred with Remix


  • Revsiting CSS grid again

    • aligning all grid child elements:
      • justify-items - justifies grid children on the x axis to start, end or center; by default it is stretch

      • align-items moves grid children on the y axis with the same properties

  • aligning the entire grid container

    • justify-content -same properties , but also support space-around , space-evenly and space-between
    • align-content - same properties
  • child css grid element properties

    • align-self
    • justify-self
    • place-self - combines both of the above; the 1st arg is vertical a.k.a. align, the second arg is horizontal a.k.a. justify
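A quick sketch pulling those together (class names are made up):

```css
.grid {
  display: grid;
  justify-items: center;        /* all children, x axis */
  align-content: space-between; /* the whole grid, y axis */
}

.grid .special {
  place-self: end center; /* align-self: end; justify-self: center */
}
```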



  • [Notes from # “Deno and Remix” by Ryan Dahl at #RemixConf 💿](https://www.youtube.com/watch?v=pBcFJmQ6UVM)
    • Deno Deploy + Fresh can be deployed and distributed to worldwide CDNs in less than 10 seconds. This is due to Fresh having 0 build time and JIT compilation of the framework when requested at the edge

    • We’re moving to a post-linux era for deployment. Deno Deploy is built on V8 isolates, which is also the reason why cold starts are so fast. more on V8 isolates here

    • Fresh is an edge-native framework. This means it was built to work directly with Deno Deploy and V8 isolates.

      • Deno Deploy is a runtime built on top of V8 that also has built-in native web APIs, e.g. fetch. So Fresh is built to work with Deno directly.
      • Remix can run on Deno Deploy but only after a compilation step. This means Remix is not “edge native”


  • back on a next.js project with tailwind and lots of css grid. Not feeling productive right now as I’ve sort of forgotten the tailwind syntax and my mental model of css grid is not super great. Need to revise css grid!


  • classic case of agonising over a bug for hours on end, then finally solving it in about 30 mins after a weekend break.



  • Was accepted into the Printful affiliate programme. I can enable others to set up their own swag stores and earn commission that way too! The shiba swag store merely needs to be a template for others.

  • #blog-idea “Your Using TypeScript Wrong”: a blog on how to use TS correctly when working with third party rest APIs and how to type the any response.


  • Committing to using zustand for global state management for the shiba swag store. Decided it’s probably an anti-pattern to share around hooks in global state, which is what I was doing with react context providers… When using zustand, keeping things simple with basic primitives seems to be the way

    • encountered some weirdness when using the persist middleware which was fixed by this suggestion


  • learning a lot about the sanity structure builder. The API is a little confusing but I can see how it’s very extensible and powerful, especially since you can simply bring a react component anywhere…

  • zustand is a bit of a deal breaker due to the fact you can’t initialise hooks inside a store. For example I want to initialise chakra ui’s useDisclosure and share it as a globally accessible action. Don’t think this is possible with zustand.


function isNonNull<T>(val: T | null | undefined): val is T {
	return val !== null && val !== undefined;
}

const cleanupNullsInArray = cleanupNulls.filter(isNonNull);
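For context, a usage sketch: because of the `val is T` return type, TypeScript narrows the filtered array’s element type, which a plain inline callback would not do.

```typescript
function isNonNull<T>(val: T | null | undefined): val is T {
  return val !== null && val !== undefined;
}

const values: Array<number | null | undefined> = [1, null, 2, undefined, 3];

// narrowed to number[] thanks to the type guard
const clean: number[] = values.filter(isNonNull);

console.log(clean); // [ 1, 2, 3 ]
```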



  • GQL codegen was a terrible idea due to the fact the TS interface it generates makes everything optional, meaning defensive code for everything. Also the response we’re getting back doesn’t match the query given. so it’s one big lie?!

  • if we’re getting the full payload back regardless, GROQ codegen is superior to GQL codegen, as at least it honours required fields.


  • reviewing CMSs that are the best fit for creating an authoring experience with sections that can be reordered by an end user.
    • GraphCMS gets pretty close with components, but currently with the lack of nested components, the authoring experience starts to fall apart
    • Currently reviewing Sanity. It’s a lot more mature and also flexible. I think if I follow this article we can achieve the editing experience I’m looking for
      • I need to learn more about GROQ
      • They support GQL


  • Made coming-soon.shibe.store live!
    • for what is seemingly a little landing page there was a lot of code to write!
    • Front-end is Astro.js; a few components like the twitter icon and the shiba logo took a while to nail
    • Backend is all netlify functions: one to send a confirmation email to the signed-up user, then a webhook to save to supabase, then another webhook to track when the email was opened
    • Kinda chuffed that the front end has 0 JS except for some analytics code. The modal is achieved using the css :target selector, which matches when a url fragment appears; the fragment is set by an HTTP redirect header via a netlify function


  • Playing around with Figma a lot for the shiba swag store. I love how Figma thinks about design from a web development perspective.
    • Masks in Figma are a way to clip one layer with another layer, like a cookie-cutter effect. The thing to note is the layer you want to use as the “cookie-cutter” needs to be behind the layer you want to cut, before you select both layers then click the “mask” button.


tldr, here’s the code

let crypto;
try {
  crypto = require('node:crypto');
} catch (err) {
  console.log('crypto support is disabled!');
}

const apikey = "this-would-be-a-hash-provided-by-mailgun"

// this would be available in the event.body of the webhook response
const signature = {
    token: 'jkashdjkashjkdhaksjhdashk',
    timestamp: '1653941793',
    signature: 'hjdkashdjkashdaskh'
};

const key = signature.timestamp + signature.token
const hash = crypto.createHmac('sha256', apikey).update(key).digest('hex')

console.log(hash) // this should match signature.signature
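A possible refinement (my sketch, not from the original note): wrap the check in a helper and compare with crypto.timingSafeEqual so the signature comparison is constant-time.

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Verifies a Mailgun-style webhook signature; the function name is illustrative
function verifySignature(apiKey: string, timestamp: string, token: string, signature: string): boolean {
  const digest = createHmac('sha256', apiKey)
    .update(timestamp + token)
    .digest('hex');

  // timingSafeEqual throws if buffer lengths differ, so guard first
  if (digest.length !== signature.length) return false;
  return timingSafeEqual(Buffer.from(digest), Buffer.from(signature));
}
```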


animated SVG strokes - https://svgartista.net/


  • Remix vs Next.js - https://www.brenelz.com/posts/next-vs-remix

  • https://css-tricks.com/linearly-scale-font-size-with-css-clamp-based-on-the-viewport/

  • new useEffect docs https://beta-reactjs-org-git-effects-fbopensource.vercel.app/learn/synchronizing-with-effects

    • Unlike events, effects are caused by rendering itself rather than a particular interaction.
    • Effects let you synchronize a component with some external system (third-party API, network, etc).
    • By default, effects run after every render (including the initial one).
    • React will skip an effect if you specify dependencies and all of them are the same as during the last render.
    • You can’t “choose” your dependencies. They are determined by the code inside the effect.
    • An empty dependency array ([]) corresponds to the component “mounting”, i.e. being added to the screen.
    • When Strict Mode is on, React mounts components twice (in development only!) to stress-test your effects.
    • If your effect breaks because of remounting, you need to implement a cleanup function.
    • React will call your cleanup function before the effect runs next time, and during the unmount.
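The dependency-skipping rule above can be illustrated with a small sketch (this mimics React’s comparison, which uses Object.is per dependency; it is not React’s actual source):

```typescript
function depsAreEqual(
  prev: readonly unknown[] | undefined,
  next: readonly unknown[]
): boolean {
  // no previous deps (first render) → the effect always runs
  if (!prev || prev.length !== next.length) return false;
  // React compares each dependency with Object.is
  return next.every((dep, i) => Object.is(dep, prev[i]));
}

depsAreEqual([1, 'a'], [1, 'a']);     // true  → effect skipped
depsAreEqual([{ x: 1 }], [{ x: 1 }]); // false → new object identity each render, effect re-runs
```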


  • Got mailgun configured to send emails from my chiubaca.com domain.

    • Had to add a bunch of TXT, MX and CNAME records to my domain DNS, which was not on google domains but on Netlify, as the nameservers had been forwarded to Netlify. #note-to-self
  • When sending emails via the EU region on Mailgun, make sure to include the url property when setting up the mailgun client

const API_KEY = "xxxxxxxxXxxxxxxxxxxxxxxxxxxx-xxxxxxx-xxxxxx";
const DOMAIN = "mydomaim.com";

const formData = require('form-data');
const Mailgun = require('mailgun.js');

const mailgun = new Mailgun(formData);
// the url property points the client at the EU region
const client = mailgun.client({username: 'api', key: API_KEY, url:"https://api.eu.mailgun.net"});

const messageData = {
  from: 'Yoopster <[email protected]>',
  to: '[email protected]',
  subject: 'Hello',
  text: 'Testing some Mailgun awesomeness!'
};

client.messages.create(DOMAIN, messageData)
  .then((res) => console.log(res))
  .catch((err) => console.error(err));

- https://stackoverflow.com/a/71574225/7207193


  • lots of new web features announced at Google I/O
    1. Accent color (CSS)
    2. Dialog (HTML element)
    3. Cascade layer (CSS)
    4. bfcache (back/forward cache)
    5. Privacy sandbox API
    6. Topics API
    7. One step towards password less
    8. Eye dropper API
    9. Virtual keyboard API
    10. Web codecs API
    11. Window control overlays
    12. Navigation API
    13. Page Transition API
    14. Performance analysis panel



  • Work talk about learning how to learn:
    • Spaced repetition learning
    • read docs / text books from cover to cover
    • read source code
    • make things with side projects


  • Been thinking about how to level up my FE skills and take them to the next level. Perhaps reading the source code of Chakra UI would help; keen to learn how you make a truly scalable and adaptable design system which is also a11y friendly…


  • Implementing GTM at work. Google analytics has changed so much, “universal analytics” aka old GA is what I was used to, and it’s now moving to GA 4.
  • Also GTM is the thing that lets you enable lots of other tracking including GA4. Kinda scary to see how these script tags can inject other scripts into your website and could potentially just hijack your whole website…


  • Trying to integrate the Google Places API into the shiba swag store. Interesting to see there is no isomorphic npm package for the google maps sdk. For the browser you have to use a script tag, similar to Paypal and Stripe

  • Google cloud platform has a horrible onboarding experience, as they make enabling billing mandatory.


  • Starting to understand the PayPal SDK a bit better… cross referencing between react-paypal <-> paypal-js <-> PayPal JS SDK <-> PayPal REST API is very confusing
  • The React package has been OK to use so far; the PayPalButtons components have a 1-to-1 mapping to the paypal-js package, except I’ve had lots of problems trying to get onShippingChange to patch the PayPal order in order to cross check Printful for shipping estimations.
    • going to have to do this a different way: 1) collect the user’s billing and shipping address upfront, 2) send these details to PayPal via the onShippingChange callback.
    • Yes this means more code, but it also means this feature should be ready for a Stripe integration later down the line.


  • Writing up notes on the paypal-sdk. It’s not been an easy thing to grok. Key takeaway: you need to reference the REST API to understand the JavaScript SDK, you need to reference the JavaScript SDK to understand paypal-js, and you need to reference paypal-js to understand react-paypal-js. Wrapper inception.


  • Lazy day in the resort, working a little bit on the shiba swag store. doing important stuff like setting up eslint for import ordering (sarcasm). But finally have some down time to revisit the eslint docs for import/order

  • I want to have react and next libs grouped at the top, then node modules, internal modules, then relative files, in spaced groups. I’ve kinda got it working with the following eslint config:

  "rules": {
    "@typescript-eslint/no-unused-vars": "error",
    "@typescript-eslint/no-explicit-any": "warn",
    "@typescript-eslint/ban-ts-comment": "warn",
    "import/no-cycle": [1, { "maxDepth": 1 }],
    "import/order": [
      "error",
      {
        "groups": ["external", "builtin", "internal", "sibling", "index", "type"],
        "pathGroups": [
          {
            "pattern": "{react,next,next/**}",
            "position": "before",
            "group": "external"
          }
        ],
        "pathGroupsExcludedImportTypes": [],
        "newlines-between": "always"
      }
    ]
  }

  • The key part of the config is pathGroups, which takes a minimatch expression. not something I’d heard of before, but this minimatch cheatsheet was useful.



  • Really enjoying Chakra UI so far, it provides low-level utility props similar to utility classes in tailwind. It makes a lot of sense and is very intuitive.

  • shiba swag store progress: slowly building up the shopping cart UI, it’s not been too bad so far.


  • Thinking about the architecture of shiba swag store.
    • React FE
    • Supabase backend for tracking the state of transactions
    • Printful for inventory and shipping


Shared some advice with a developer on a bootcamp on staying motivated:

  1. Build a streak. Joining in with #100DaysOfCode was super motivating for me.
  2. Work in small and consistent chunks. In the beginning, I liked to use the Pomodoro technique to force concentrated learning in 25 minute intervals with 5 minute breaks. I would say to myself ‘let’s do 1 Pomodoro worth of reading’ and I would always end up doing more once I got started.
  3. Remove friction. Know exactly what you’re going to work on each day. Yup, this means making a plan of what you want to learn in advance. I knew I was going to be doing freeCodeCamp for a couple of months so this made it easy for me.
  4. Write down all your ideas of things you want to build and learn. Every single one. Then prioritise and give yourself a deadline. Here’s how I organise mine - https://alexchiu.notion.site/af99042d8d314a9bba17b522bc16671c?v=4504ef02aef04c8690c6f3dde403fc3d…
  5. Motivation is overrated. Build habits. Try and make coding a part of your daily routine. I highly recommend reading Atomic Habits where he talks about this in great detail - https://jamesclear.com/atomic


  • Thinking a bit about the stack architecture for building-a-headless-dropshipping-site

    • After a user has made a purchase of an item, we should store this transaction.
    • The transaction should store the state i.e. pending, failed, paid
    • only when the transaction is in the paid state do we have the funds in our account. Then we can forward the order to printful.
  • starting to see the value of something like snipcart. not only does it handle the frontend cart UI, but it also acts as a transactions backend for the store…
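The transaction lifecycle above could be sketched like this (names and shapes are mine, not from any real service):

```typescript
type TransactionState = 'pending' | 'paid' | 'failed';

// a pending transaction can resolve either way; paid/failed are terminal
const allowedTransitions: Record<TransactionState, TransactionState[]> = {
  pending: ['paid', 'failed'],
  paid: [],
  failed: [],
};

function canTransition(from: TransactionState, to: TransactionState): boolean {
  return allowedTransitions[from].includes(to);
}

function canForwardToPrintful(state: TransactionState): boolean {
  // only forward the order once the funds are actually in our account
  return state === 'paid';
}
```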



  • lots of research into Snipcart vs alternatives. Everyone wants a slice of the pie when it comes to ecommerce!
    • snipcart charges 2% of every transaction but:

      *For clients with monthly sales under $ 629 CAD, the 2% will be replaced by a $ 13 CAD monthly fee. Note that you’ll be charged in USD.

    • shopify lite seems to be the only other interesting competitor but still not doing exactly what snipcart does with regards to a “headless” shopping cart which also integrates with other services.

struggling to really understand why we have these middle-men aside from the fact they handle tricky parts like VAT and linking up payment gateways. snipcart also has some other added benefits like analytics and abandoned-cart strategies…


  • This is an awesome repo demoing how to integrate Printful, snipcart and a react frontend. Going to dissect it and try to build my own version.

  • down the rabbit hole I go… VAT and Printful 😪

    • try to get a VAT ID so you don’t have to pay 20% to europe
    • Don’t worry about anything until I’ve sold over €94 741 in the UK or €78 022 to another EU state.


  • Getting a lot more comfortable with framer motion. The main mental model is that framer motion doesn’t mind you unmounting react components; with <AnimatePresence/> it will handle transitioning through the unmounting state. So just think about a component’s initial state, its animating state and its exit state.


Senior developers are just regular developers that have “seen-your” code a hundred times before.


  • tiny-invariant is a neat TS helper for managing data that is either null or undefined, but which you know with confidence should always be ok. If it’s not, you accept your program will blow up. This can help save many lines of type-guarding logic.


  • Remember to expose and attach className to a styled component so you can extend an existing styled component. object oriented css baby!

  • combine lodash/uniqWith and lodash.isequal to dedupe an array of objects.
import uniqWith from 'lodash/uniqWith';
import isEqual from 'lodash.isequal';

const arrayOfObjects = [
	{ id: 1, content: 'a' },
	{ id: 1, content: 'a' },
	{ id: 2, content: 'b' },
];

// the duplicate { id: 1, content: 'a' } entry is removed
return uniqWith(arrayOfObjects, isEqual);



#typescript fun

  • return the type of a resolved promise

export type AsyncFunctionType = Awaited<ReturnType<typeof yourAsyncFunction>>

if it returns an array, simply get the first item from it

export type AsyncFunctionType = Awaited<ReturnType<typeof yourAsyncFunction>>[0];
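A quick worked example, with a made-up async function:

```typescript
// hypothetical async function, for illustration
async function fetchShibas() {
  return [{ id: 1, url: 'https://example.com/shiba.jpg' }];
}

// the resolved array type: { id: number; url: string }[]
type Shibas = Awaited<ReturnType<typeof fetchShibas>>;

// a single element of that array: { id: number; url: string }
type Shiba = Awaited<ReturnType<typeof fetchShibas>>[0];

const example: Shiba = { id: 2, url: 'https://example.com/another.jpg' };
```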


  • losing my mind over page transitions in next.js and framer motion. trying to orchestrate when animations should fire is one of the most mind-boggling coding tasks ever. Making animations predictable and robust is hard because you expect events fired by these frameworks to work 100% of the time, but when they don’t it completely throws off your mental model of how the animation flows from one step to another…


  • Remix with its nested routes sounds like it will solve a bunch of issues I’m facing with Next.js, where global state has to be fed in from the parent component. it means every corresponding page has to run the same data fetching logic. Really need to get around to trying out remix soon…


  • Got a basic shopify store set up pretty quickly. Realised it’s $24 a month to run the store. I can understand why they charge this as it was literally so easy to set up a tshirt store with an integration to printful. It was all set up in under 30 mins.

  • Going to wire up the integration with printful myself and code the UI from scratch to save a lot of money!!


  • pagination is hella tricky.. still trying to get my head around why this skip logic works…
 let pagesToSkip = CURRENT_PAGE_NUMBER === 1 ?
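For what it’s worth, here’s how that skip calculation usually completes — a standard offset formula with an assumed page size, not the original code (which is truncated above):

```typescript
const ITEMS_PER_PAGE = 10; // assumed page size, for illustration

function pagesToSkip(currentPageNumber: number): number {
  // page 1 skips nothing; page n skips the items shown on the previous n - 1 pages
  return currentPageNumber === 1 ? 0 : (currentPageNumber - 1) * ITEMS_PER_PAGE;
}

console.log(pagesToSkip(1)); // 0
console.log(pagesToSkip(3)); // 20
```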


  • First time dealing with pagination with GraphCMS and Next.js. With static pagination there are a few things to consider:
    • how many items appear per route.
    • find the max total of items upfront (this is used to figure out how many buttons and pages should be generated!)
    • all viewports need to have the same amount of items!
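The "max total upfront" point can be turned into a small sketch for generating the page routes (names are illustrative, loosely in the shape of Next.js getStaticPaths params):

```typescript
function totalPages(totalItems: number, itemsPerPage: number): number {
  return Math.ceil(totalItems / itemsPerPage);
}

// e.g. 45 items at 10 per page → 5 page routes
const paths = Array.from({ length: totalPages(45, 10) }, (_, i) => ({
  params: { page: String(i + 1) },
}));

console.log(paths.length); // 5
```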


  • Decided to go down the route of creating a page transition after next.js has done the routing. but another hurdle is figuring out if it was a page route or a hash route. you can track this with next/router; there are listeners to track both of these events


  • page transitions in next.js are hard because you have to think about when the transition will play. When you route to another page the element which was running the transition will be removed by React, so:
    • do you delay the route and let the transition complete?
    • or run the route, then somehow trigger the transition?


  • trying to port my POC page transitions into a real next.js project is proving difficult. Almost got it working but with a slight bug: going between anchors on the same page doesn’t animate?! this works perfectly in my POC though…



<motion.div
	animate={{
		x: 0,
		y: 0,
		scale: 1,
		rotate: 0,
	}}
/>

  • The animate prop will animate the css properties of the current div with the animate object

  • There are also a bunch of animation helpers, or what framer calls “gesture animation props”, e.g. whileHover, whileTap, whileFocus, whileDrag and whileInView. These let us run animations when these gestures are active.

  • Variants let us reuse animations

    const variants = {
    	visible: { opacity: 1 },
    	hidden: { opacity: 0 },
    };

    <motion.div variants={variants} />


  • Leveraging react context for global state to store cms data like strings and urls. This way I can inject these strings directly into components without passing props all over the place.

  • the cms context:

interface CmsData {
	homePageContent?: HomePageContent;
	aboutPageContent?: AboutPageContent;
}

type ContextInterface = [
	cmsData: CmsData | undefined,
	setMessages: React.Dispatch<React.SetStateAction<CmsData | undefined>>
];

const CmsContext = React.createContext({} as ContextInterface);

export const CmsProvider: React.FC = ({ children }) => {

	const [cmsContent, setCmsContent] = useState<CmsData | undefined>(undefined);

	return (
		<CmsContext.Provider value={[cmsContent, setCmsContent]}>
			{children}
		</CmsContext.Provider>
	);
};

using useContext we can make a hook to set and get the cms data:

export function useCmsContent(content?: CmsData) {

	const [cmsContent, setMessagesState] = useContext(CmsContext);

	useEffect(() => {
		if (!content) {
			return;
		}
		setMessagesState(content);
	}, [content]);

	return cmsContent;
}


with this in place, we can set the data on first page load…

const Home: React.FC<HomeProps> = ({ homePageContent }) => {

	// homePageContent is data from next getStaticProps
	useCmsContent({ homePageContent });

	return (
		{/*...truncated for brevity... */}
	);
};


then in any child component we can use the data easily without having messy props all over the place:

const DeeplyNestedComponent: React.FC = () => {

	const cmsData = useCmsContent();

	return (
		{/*...truncated for brevity... */}
	);
};


  • How to carry on using env.local with GQL CodeGen
`"codegen": "DOTENV_CONFIG_PATH=./.env.local graphql-codegen -r dotenv/config --config codegen.yml",`


  • kaboomjs is a super fun and easy way to make games with JS. It injects globals into the browser which is kinda weird, but makes for a fun dev experience


  • I wonder if prismic is actually better than graphcms… page schemas and component plugins for rich editors seem to be absent from graphcms, and they’re something that is really useful



  • Click through an overlay div element with pointer-events: none;. Useful when there is an overlay positioned absolutely over your content but you still want the things underneath to be clickable.


  • Not strictly zero days the last couple of days. Been working like crazy for a tight work deadline! I’ve learnt so many things, mainly niche safari things

    • don’t use css gap
    • power saving mode causes all videos to pause and look ugly
    • webkit-box is not standard but fixes things?!
  • Also just accelerated my mentoring, TS and React skills recently.


  • github co-pilot taught me a stonker of a css trick today:

Add ellipsis to truncate long text after x amount of lines:

  overflow: hidden;
  text-overflow: ellipsis;
  display: -webkit-box;
  -webkit-line-clamp: 2;
  -webkit-box-orient: vertical;


  • I want typescript code-gen for graphcms but not sure how to achieve this yet. I am really enjoying graphcms though. I’m moving very fast with it. Love the auto generated gql api.


  • Graphcms stuff for work. what are you supposed to do with global-level or page-specific schemas when you only want to update them in one place and there should not be any other subsequent records? 🤔


  • github co-pilot taught me a neat trick to select all elements except the first child
/* select all children except the first child */

.select-children:not(:first-child) {
	/* styles */
}


  • Neat trick to handle form data using FormData. It converts form data into a js object.
  const handleSubmit = async (event: any) => {
    event.preventDefault(); // stop the default full-page form submission

    const form = new FormData(event.target);

    const formData = Object.fromEntries(form.entries());

    const res = await fetch('/api/contact', {
      body: JSON.stringify(formData),
      headers: {
        'Content-Type': 'application/json',
      },
      method: 'POST',
    });

    // const result = await res.json();
  };


  • On a pretty stressful project for work right now with a tight deadline. Practicing keeping calm and staying organized.
  • Having design tokens set up up-front seems like it is paying off.


  • Revising GSAP scroll trigger again. All tutorials generally target DOM elements with querySelector, but GSAP works perfectly with React refs.
  • when installing in a react/next.js app we need to remember to import gsap and ScrollTrigger, then finally register the plugin. I think registration only needs to happen once, but it’s ok to register multiple times and it will be treated as a singleton.
import gsap from 'gsap';

import { ScrollTrigger } from 'gsap/dist/ScrollTrigger';

gsap.registerPlugin(ScrollTrigger);



  • how to type your styled-components theme:
import 'styled-components';
import theme from '../theme';

type Theme = typeof theme;

declare module 'styled-components' {
	export interface DefaultTheme extends Theme {}
}

Had a quick look at GraphCMS and it’s shockingly similar to PrismicCMS. It’s much more GQL orientated, which is fine as I would use Prismic with GQL too.

Like Prismic, they also provide a rich text renderer component with pretty much the exact same API - https://graphcms.com/blog/graphcms-react-rich-text-renderer



Reading up on Next.js getStaticProps and getStaticPaths. Nice tutorial https://spacejelly.dev/posts/how-to-create-pages-in-next-js-with-static-dynamic-data/#step-2-using-getstaticprops-to-statically-generate-a-page-with-dynamic-data

Not quite figured out how to infer the types when consuming these static props in the component… Interesting suggestions here but nothing I like: https://stackoverflow.com/questions/65078245/how-to-make-next-js-getstaticprops-work-with-typescript


  • It’s a good idea to lint your project before committing. this can be done with a husky git hook. Husky lets you run commands at certain stages of your git workflow
  • we can use the husky pre-commit hook to run a lint on files before committing to a repo. But this will lint all the files in your project, even the ones you are not committing, potentially blocking your work, which can be pretty annoying
  • lint-staged lets us run a command on only the files that have been staged. We can configure husky to run lint-staged instead.
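A minimal sketch of what that setup could look like in package.json (globs and commands are illustrative), with a .husky/pre-commit hook that runs npx lint-staged:

```json
{
  "scripts": {
    "prepare": "husky install"
  },
  "lint-staged": {
    "*.{js,ts,tsx}": "eslint --fix"
  }
}
```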

  • CSS transitions when changing routes with Next.js can be really tricky due to the fact changing routes in next causes a full re-render, therefore if you try to add an onClick handler to append a class to an element the following happens:
    1. click the element to add a class with an animation.
    2. the animation starts…but…
    3. the whole page re-renders, cutting off the animation

so how can we solve this?

Can we either run the animation, i.e. add the class, after everything has finished re-rendering?

I don’t think so…

Or can we ensure the animation has time to run before the whole page re-renders?

next-page-transitions solves this.

it solves the problem of making sure only one page component is mounted at a time and that the next page isn’t mounted until the previous one has completed its exit animation.

The timeout prop is particularly useful. It lets you delay how long it takes for the next page to render.


  • OGL a 29kb alternative to three.js (which is 149.1kB)

  • Got a neat on hover animation effect working with css - https://codepen.io/chiubaca/pen/gOGzEve?editors=0100
    • reverse engineered the ‘Shutter Out Vertical’ effect from Hover.css.
    • This exercise surprisingly stretched my css skills more than I thought. at a high level I had to absolutely position the ::before pseudo element; this gives you a free div-like element to play with to create a shutter effect by transitioning between two transform: translateY values.
    • When we apply transition: all 1s to both the hover state and the regular state, the animation transitions both on hover and out. thanks css tricks! - https://css-tricks.com/different-transitions-for-hover-on-hover-off/


  • More work on hacker.fm and svelte, tackling some svelte-specific bugs. When a track is playing and we switch to a different track we need to manually handle pausing and replaying the audioElement; I find it weird this wasn’t a problem with vanilla js + react. Same kind of issue when seeking a track: when trying to seek I need to manually handle pausing the track and resuming it if the track was playing before the seek…


  • Enjoying svelte.js more the more I use it. It feels like Vue.js but with much more concise syntax. The out-of-the-box two-way data bindings for elements like audio are amazing and have made syncing up state trivial. Stores are just updateable objects, no need to think about the flux pattern, which I’m not sure is a good or bad thing yet.


  • Starting to get to grips with the ergonomics of svelte. Using custom template syntax feels odd after using jsx so much, but the syntax is intuitive enough.

  • custom use action directives are powerful - https://svelte.dev/docs#template-syntax-element-directives-use-action

    • This removes the need to bind an element to a ref then pass it to a third party lib. I like how it has handles for updating and destroying an element.

    Should any elements that are fed into a third party lib use use directives?

  • found a nice svg plugin for svelte - https://github.com/poppa/sveltekit-svg



  • Thinking a little bit about my work presentation for the new year [[permanent-notes/making-eco-friendly-websites]].

    • The Preact compat layer lets you swap out React for Preact in existing React codebases. My understanding so far is that it provides the exact same API without any compromises. We did this for Penso and there were no issues, and we’re using the usual react features such as context, various hooks like useState and useEffect, and third party libs like styled components.

    • Worth noting that the preact-compat layer is an additional 2kb of overhead. But preact is so small already at 3kb that 5kb in total is not so bad compared to react’s ~120kb

    • I like this summary of why less code is better, from https://robertknight.me.uk/posts/preact-react-tradeoffs/

    Having less code makes everything which processes that code
    run or start faster. For developers this includes build times, test
    execution times, continuous integration cycle times and how long
    it takes the application to load in development. Fast cycle times
    make for happier and more productive developers. For end users,
    it affects how long it takes the application to load in their browser,
    particularly on lower-end devices and slower networks.

  • I can’t find many tradeoffs with using preact over react, which does make me question why we don’t just use preact for everything?!

    • A big one is that it will always be slower to get new react features, such as react server components.
  • the preact-compat layer provides 100% api compatibility with react.


  • Using css gradients heavily in my new blog. learning about some caveats, such as you can’t just wrap a css gradient in a variable. e.g. this doesn’t work:
 --background-gradient: linear-gradient(to right, #24243e, #302b63, #0f0c29);
   background: var( --background-gradient );

  • Struggling a little bit to figure out how to best compose my css variables for my entire site. Right now it looks something like this

*  _colours.scss

// These are reusable variables only to be consumed in this file, because
// there is repetition between the media queries and the data-theme overrides
:root {
  --dark-bg: #161616;
  --dark-text: #dbdbdb;
  --dark-code: #f5f5f5;
  --dark-code-block: #6e6c6c;
  --light-bg: white;
  --light-text: #093050;
  --light-code: #740101;
  --light-code-block: #d1d1d1;
}

@media (prefers-color-scheme: dark) {
  :root {
    --background: var(--dark-bg);
    --primary-text: var(--dark-text);
    --secondary-text: var(--dark-text);
    --links: var(--dark-text);
    --code: var(--dark-code);
    --code-block: var(--dark-code-block);
  }
}

@media (prefers-color-scheme: light) {
  :root {
    --background: var(--light-bg);
    --primary-text: var(--light-text);
    --secondary-text: var(--light-text);
    --links: var(--light-text);
    --code: var(--light-code);
    --code-block: var(--light-code-block);
  }
}

// Override system preferences
html[data-theme="light"] {
  --background: var(--light-bg);
  --primary-text: var(--light-text);
  --secondary-text: var(--light-text);
  --links: var(--light-text);
  --code: var(--light-code);
  --code-block: var(--light-code-block);
}

html[data-theme="dark"] {
  --background: var(--dark-bg);
  --primary-text: var(--dark-text);
  --secondary-text: var(--dark-text);
  --links: var(--dark-text);
  --code: var(--dark-code);
  --code-block: var(--dark-code-block);
}


  • Worked on migrating my blog over to Astro.js. Still loving the ergonomics of Astro. Composing HTML blocks really feels like a massive productivity boost. JSX-like but without the overhead.


  • Oh boy, third #zero-day in a row! It’s xmas after all! My new year’s resolution is to not be too hard on myself. Let’s enjoy this xmas period and be present in the moment.


  • React server components (RSC) !== server side rendering (SSR).
    • RSC complements SSR.
  • RSC streams in the UI diffs and React uses this data to update the UI
  • RSC are really good when you need to run heavy js libs on the server and stream the result back, so the client does not take the network hit of downloading the library.


  • Two-way data binding in svelte is too easy! Works basically the same as v-model in vue
	let name = 'world';

<input bind:value={name}>

<h1>Hello {name}!</h1>

In the DOM, everything is a string. That’s unhelpful when you’re dealing with numeric inputs — type="number" and type="range" — as it means you have to remember to coerce input.value before using it.

With bind:value, Svelte takes care of it for you:

<input type=number bind:value={a} min=0 max=10>
<input type=range bind:value={a} min=0 max=10>

bind works across pretty much everything you’d expect it to, including textarea, input and select, and even on contenteditable elements, where you can bind textContent or innerHTML.


Svelte provides reactive bindings to audio and video elements. The complete set of bindings for <audio> and <video> is as follows — six readonly bindings…

  • duration (readonly) — the total duration of the video, in seconds
  • buffered (readonly) — an array of {start, end} objects
  • seekable (readonly) — ditto
  • played (readonly) — ditto
  • seeking (readonly) — boolean
  • ended (readonly) — boolean

…and five two-way bindings:

  • currentTime — the current point in the video, in seconds
  • playbackRate — how fast to play the video, where 1 is ‘normal’
  • paused — this one should be self-explanatory
  • volume — a value between 0 and 1
  • muted — a boolean value where true is muted

Videos additionally have readonly videoWidth and videoHeight bindings.

get a reference to a DOM element using bind:this={canvas}. This is similar to refs in React.

Note: interacting with the reference needs to be done after the component is mounted. Therefore, the logic all needs to happen within the onMount lifecycle hook.

component bindings are kinda mind blowing. They provide an API to get access to a child’s props

I’m still trying to get my head around binding to component instances - https://svelte.dev/tutorial/component-this

Lifecycle events are almost the same as in Vue.js

onMount & onDestroy - do something after component is mounted and when it is unmounted/destroyed

beforeUpdate & afterUpdate - Do something before or after the component is updated great example use case here https://svelte.dev/tutorial/update.

tick - is a promise that can be called anytime to ensure that any pending updates have been completed before running something next. - https://svelte.dev/tutorial/tick

Svelte stores are Svelte's built-in global state management solution. Sort of like React context or Vuex, but much easier to reason about!

writable - a mutable reactive object which has set and update methods

readable - a read-only store. It is instantiated as a function: the first arg is the initial value, the second is a callback which receives set and returns a stop function. The callback runs on the first subscription; stop is called on the last unsubscribe.

derived - is a way to leverage the value from a different store - https://svelte.dev/tutorial/derived-stores

Nice way to keep store logic “clean” - https://svelte.dev/tutorial/custom-stores . A “custom” svelte store only needs to be an object that exposes a subscribe method and it is automatically a svelte store.

bind works on writable svelte stores - https://svelte.dev/tutorial/store-bindings
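To make the "just an object with a subscribe method" idea concrete, here's a framework-free sketch of a writable-style counter store (a hypothetical helper, not Svelte's actual implementation):

```javascript
// Minimal store honouring the Svelte store contract: subscribe() calls the
// subscriber immediately with the current value and returns an unsubscribe fn.
function createCounter(initial = 0) {
  let value = initial;
  const subscribers = new Set();

  const set = (next) => {
    value = next;
    subscribers.forEach((fn) => fn(value));
  };

  return {
    subscribe(fn) {
      subscribers.add(fn);
      fn(value); // new subscribers get the current value straight away
      return () => subscribers.delete(fn);
    },
    set,
    update: (fn) => set(fn(value)),
  };
}

const counter = createCounter(0);
let seen;
const unsubscribe = counter.subscribe((v) => (seen = v));
counter.update((n) => n + 1);
console.log(seen); // 1
unsubscribe();
```

Because the contract is just `subscribe`, Svelte's `$counter` auto-subscription would work with this object too.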


  • bash/terminal shortcuts
  1. ctrl-a - move the cursor to the beginning of the current line
  2. ctrl-e - move the cursor to the end of the current line
  3. alt-b - move the cursor backwards one word
  4. alt-f - move the cursor forward one word
  5. ctrl-k - delete from cursor to the end of the line
  6. ctrl-u - delete from cursor to the beginning of the line
  7. alt-d - delete the word in front of the cursor
  8. ctrl-w - delete the word behind the cursor


  • learning.chiubaca.com is now notes.chiubaca.com
  • Final touches on the automation of this website. I can now:
    • Sync my notes across different machines using google drive and AutoSync
    • ctrl + shift + s will pull and push changes to GitHub, all within Obsidian
    • on push, a GitHub Action is configured to hit a Netlify redeploy webhook, which redeploys notes.chiubaca.com with the latest notes.


  • Massive anxiety attack in the middle of the night. Read a couple of articles.

    The difference between good stress and bad stress is the chance to return to equilibrium (homeostasis), which has beneficial effects in itself. This is the kind of anxiety I get in the middle of the night, and the only way to recover is to get out of bed to chill out.

    We live in our heads a lot of the time, and are happy that way…this means that we can become a bit disconnected from our bodies. 100% me in the last 4 years…

    • Actions from the article
      • exercise
      • breathing exercises
      • remove yourself from the stressful situation

  • Started working on moving my Obsidian notes out of the learning.chiubaca.com astro codebase and have actually made good progress. Simply git cloning the notes and moving the articles into the right astro directory works surprisingly well!


  • Very impressed with forestadmin.com. It’s a sort of low-code way to edit data in your database (I think similar to Airtable?). The setup has been incredibly easy:
    1. create a forest admin account to access the dashboards
      • The same login lets you manage multiple teams and backends
    2. install the server client and connect it to the dashboard with like one or two commands

That’s it! Once it’s configured you can apply model validation rules via the server for server-side validation. What’s neat is you also get client-side validation, which is all configurable via the dashboard.


  • Working with webpack analyser today and got a cool script working to extract the file sizes from the json output

const fs = require('fs');
const DATA = require('../stats.json');

function getFileSize(file) {
  const { size } = fs.statSync(`.next/${file}`);
  console.log(`${file} is ${size} bytes`);
  return size;
}

const fileSizes = DATA.chunks.map((element) => getFileSize(element.files[0]));

const res = fileSizes.reduce((total, num) => {
  return Number(total) + Number(num);
}, 0);

console.log({
  size_in_bytes: res.toFixed(2),
  size_in_kilobytes: (res * 0.001).toFixed(2),
});


  • Feels surprisingly good to be back on a React project after 3 weeks of Twig/PHP! Had a bit of a revelation around how React works vs Vue. React is a pull system: reactivity happens via re-runs of functions, so your mental model needs to be ‘how do we react once this function re-runs’. Whereas Vue is a push system: we mutate variables and things automatically update; we’re not reacting to a change.


  • Learning about intersection observer API again with Kevin Powel - https://www.youtube.com/watch?v=T8EYosX4NOo
  • IntersectionObserver detects when a specific element is intersecting the viewport (or a scrolling ancestor set as the root). It does not let you detect when one element is intersecting another arbitrary element.
  • Basic boilerplate:
const sectionOne = document.querySelector(".section1")

const options = {
  root: null, // null = the viewport
  threshold: 0.5, // default 0 - a percentage, e.g. 0.25 = 25%. how much of the element needs to be in view before the IntersectionObserver is triggered.
  rootMargin: "0px", // the margin of the viewport. increasing this makes the viewport more narrow. works like css, e.g. '10px 10px'.
}

const observer = new IntersectionObserver((entries) => {
  entries.forEach((entry) => {
    console.log('entry: ', entry.target)
  })
}, options)

observer.observe(sectionOne)



  • The next/image component is pretty awesome. I always knew it was clever but the more I drill into it, the better it gets
  • If your app is deployed to Vercel, it will do the image optimisation for you using Vercel’s edge network, i.e. it will optimise images via its edge functions on the fly to give users the optimum image their browser can support. https://vercel.com/docs/concepts/next.js/image-optimization#edge-network-optimization
  • Other things it does under the hood:
    • Automatically serves the images in modern image formats like WebP, which is about 30% smaller than JPEG, if the browser supports it.

    • Optimize images on demand. Build time won’t change.

    • Lazy-load the image inside the page only when a certain threshold is reached while scrolling the viewport.

    • You can decide to use it with Vercel’s internal CDN or other providers like Cloudinary or Akamai.

    • You can specify different image sizes for different and custom resolutions to be used dynamically.

    • Automatically changes the quality of the photo to a lower threshold set to 75%. This can be changed for each invocation.



  • Learning about realistic renders in three.js and it’s just like configuring a PC game settings
    • Lighting is key

    • environment mapping on the background, but also applying the environment map back onto the 3D object mesh, can give good results. Very easily done with scene.environment = environmentMap;

    • There are a lot of renderer settings!

      • outputEncoding
      • toneMapping
      • shadowMap
      • antialiasing
    • It’s a good idea to enable these and run them all through dat.GUI.



  • Learning how to import 3D models into Three.js. It’s surprisingly easy! Even though there are a lot of 3D formats the industry standard is heading towards gltf which can either be a readable json format and/or a binary format.
    • there are a few gltf variants:
      • gltf
      • glTF-Binary
      • glTF-Draco
      • glTF-Embedded
  • to import a gltf into three.js
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";

const gltfLoader = new GLTFLoader();

gltfLoader.load(
  "/models/duck.gltf", // example path
  (gltf) => {
    // success - e.g. add gltf.scene to our scene
  },
  (progress) => {
    // loading progress
  },
  (error) => {
    // loading error
  }
);

// ....


  • By default in Three.js, the camera perspective is “looking” directly down the z-axis:

    • “up” and “down” is the y axis
    • “left” and “right” is the x axis
    • zoom “in” and “out” is the z axis
  • It’s usually a good idea to normalise event positions, e.g. from 0 to 1 into -0.5 to 0.5

  • A mouse parallax effect can be created easily by animating the position of a 3D object based on the position of the mouse. When combined with the clamping of values above, we can constrain the movement.

  • Often a parallax technique can feel “mechanical”. “Lerping” is an effect that makes the movement lag slightly, to make it feel smoother.

  • The idea behind the formula is that, on each frame, instead of moving the camera straight to the target, we are going to move it (let’s say) a 10th closer to the destination. Then, on the next frame, another 10th closer. Then, on the next frame, another 10th closer.

    cameraGroup.position.x += (parallaxX - cameraGroup.position.x) * 0.1
    cameraGroup.position.y += (parallaxY - cameraGroup.position.y) * 0.1
  • it’s probably a good idea to normalise animation in a tick function with the delta time. This will make things consistent on high refresh rate monitors…


/**
 * Animate
 */
const clock = new THREE.Clock();
let previousTime = 0;

const tick = () => {
  const elapsedTime = clock.getElapsedTime();
  const deltaTime = elapsedTime - previousTime;
  previousTime = elapsedTime;

  for (const mesh of sectionMeshes) {
    mesh.rotation.x += deltaTime * 0.1;
    mesh.rotation.y += deltaTime * 0.12;
  }

  // Animate camera
  camera.position.y = (-scrollY / sizes.height) * objectsDistance;

  const parallaxX = cursor.x;
  const parallaxY = cursor.y;

  cameraGroup.position.x +=
    (parallaxX - cameraGroup.position.x) * 5 * deltaTime;
  cameraGroup.position.y +=
    (parallaxY - cameraGroup.position.y) * 5 * deltaTime;

  // Render
  renderer.render(scene, camera);

  // Call tick again on the next frame
  window.requestAnimationFrame(tick);
};

tick();


  • Back on the Three.js journey and learning about scroll based animations.
    • Working with THREE.MeshToonMaterial is quite fun. It creates a cel-shaded cartoon effect. It’s important that it’s used along with a texture file.
      • A texture file is simply an image file with some pixels that define the colour ramp
    • the texture also needs to have its magFilter set to THREE.NearestFilter so it can be used as the gradientMap for MeshToonMaterial.
// Texture
const textureLoader = new THREE.TextureLoader();
const gradientTexture = textureLoader.load("textures/gradients/5.jpg");
gradientTexture.magFilter = THREE.NearestFilter;

// Material
const material = new THREE.MeshToonMaterial({
  color: parameters.materialColor,
  gradientMap: gradientTexture,
});

    "title": "Asset Metadata",
    "properties": {
        "name": {
            "description": "Identifies the asset to which this NFT represents"
        "description": {
            "description": "Describes the asset to which this NFT represents"
        "image": {
            "description": "A URI pointing to a resource with mime type image/* 
            representing the asset to which this NFT represents. Consider making 
            any images at a width between 320 and 1080 pixels and aspect ratio 
            between 1.91:1 and 4:5 inclusive."

notice the image field, which is a link to an image. Take the example of Project Memento, an NFT that lets you change tiles you own to a desired letter. You basically trade “coordinates” on the website, and there is NO NEED for an image for the project itself to work.

We could store them on the blockchain, but that would be excessively expensive 
  - The cost of data storage is 640k gas per kilobyte of data 
  - The current gas price is approximately 50 Gwei (or 0.000000050 ETH).
  - At today’s price (30th October 2021), 1 ETH is approximately $4000 
  - Each kilobyte for an image would therefore cost around $128 (640k gas × 50 Gwei = 0.032 ETH)
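Multiplying those figures out as a quick sanity check:

```javascript
// back-of-envelope: gas per KB × ETH per gas × USD per ETH
const gasPerKilobyte = 640_000;
const ethPerGas = 0.000000050; // 50 Gwei
const usdPerEth = 4000;

const ethPerKilobyte = gasPerKilobyte * ethPerGas; // ~0.032 ETH
const usdPerKilobyte = ethPerKilobyte * usdPerEth;
console.log(Math.round(usdPerKilobyte)); // 128
```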
  • Worth noting there are NFTs that do store assets on-chain, like CryptoPunks and CryptoKitties. They did this before fees went crazy

  • My personal take is that it is the perceived intrinsic value of NFTs that makes opinions of them so polarising.

    is web 3.0 DRM 2.0? - https://twitter.com/jshbz/status/1442611217000849408


  • Well, I spent half the day learning about raycasting in three.js, then I got completely side-tracked and forked cloud-to-butt and created web3-to-butt 😆

( Ok so did end up finishing the threejs journey on raycaster )

  • Raycasting is a technique that can be used to pick up mouse clicks in a 3D space. It sort of works by shooting a straight line, a.k.a. a vector, and checking what it intersects.

  • set up a raycaster with THREE.Raycaster();

  • this little boilerplate lets you hover over 3D objects and “react” to the event

/**
 * Animate
 */
const clock = new THREE.Clock();

const tick = () => {
  const elapsedTime = clock.getElapsedTime();

  // cast a ray from the camera through the mouse position
  raycaster.setFromCamera(mouse, camera);

  const objectsToTest = [object1, object2, object3];
  const intersects = raycaster.intersectObjects(objectsToTest);

  // anything under the cursor gets highlighted...
  for (const intersect of intersects) {
    intersect.object.material.color.set("#0000ff");
  }

  // ...and anything no longer under the cursor gets reset
  for (const object of objectsToTest) {
    if (!intersects.find((intersect) => intersect.object === object)) {
      object.material.color.set("#ff0000");
    }
  }

  // Update controls
  controls.update();

  // Render
  renderer.render(scene, camera);

  // Call tick again on the next frame
  window.requestAnimationFrame(tick);
};

tick();


  • mouse onclick events can also be intercepted here too, but that requires setting up another event listener.


  • Good video on a11y and html
    • only use one h1 on a page
    • think of using h tags to structure the order of your page like a text book, not as a way of styling
    • h tags should work downwards sequentially
    • it’s ok to use multiple header and nav tags
    • semantic html creates landmarks that make it easier to skip over sections with screen readers.
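A rough skeleton of those landmarks (element choice here is illustrative):

```html
<header>
  <nav aria-label="Primary">…</nav>
</header>

<main>
  <h1>Only one h1 per page</h1>
  <section>
    <h2>Then h2s…</h2>
    <h3>…then h3s, working downwards sequentially</h3>
  </section>
</main>

<footer>…</footer>
```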

  • fun course on creating a three.js galaxy. The technique entails building a factory function which generates a bunch of particles based on a config. This seems to be a nice pattern to keep things tidy.
    • so much maths required in order to generate the randomness and curvature of the particles… 😕 Need to learn more trigonometry. Do I need to relearn sin, cos and tan again?

  • typed arrays still confuse the heck out of me, but this is what I understand so far
    • they act as an array for storing collections of a specific numeric type of bytes
    • they’re used in conjunction with an ArrayBuffer
    • you can’t read an ArrayBuffer directly. Instead you need a view over it, such as a typed array or a DataView
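A small sketch of that relationship (plain Node, nothing three.js-specific):

```javascript
// 8 raw bytes - you can't read these directly
const buffer = new ArrayBuffer(8);

// a typed array is one view over those bytes: here, 2 × 4-byte floats
const floats = new Float32Array(buffer);
floats[0] = 1.5;

// a DataView is another, byte-level, view over the same buffer
const view = new DataView(buffer);
console.log(view.getFloat32(0, true)); // 1.5 (little-endian read)
console.log(floats.length); // 2
```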


  • to support AudioContext on safari we need to use webkitAudioContext
  • in TS it looks like this
  let AudioContext = window.AudioContext || (window as any).webkitAudioContext;
  this.audioContext = new AudioContext();


  • generate random strings with node.js built in crypto lib
const Crypto = require('crypto')

function randomString(size = 21) {
  return Crypto
    .randomBytes(size)
    .toString('base64')
    .slice(0, size)
}

console.log(randomString()) // e.g. '/VuPgyBlk/aZjPADhMqQk'


Design systems help maintain, govern and preserve brand standards. Colors, typography, horizontal & vertical spacing and rhythm, as well as numerous other design aspects, can mutate over time through various iterations of design. If design aspects are changed inconsistently, there is a possibility of causing negative impact to a brand’s image, website or application, which can appear dishevelled. Additionally, the brand can lose credibility with these types of inconsistencies.

  • Tier 1: Typography & color styles, spacing rules

  • Tier 2: Atom level components using styling from Tier 1 which may include buttons, dropdowns and input fields

  • Tier 3: Combining elements of tier 2 components to create more complex patterns.
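Tier 1 might live as design tokens, with Tier 2 components consuming them (hypothetical values):

```css
/* Tier 1: raw tokens */
:root {
  --color-brand: #3b5bdb;
  --font-body: Georgia, serif;
  --space-1: 0.5rem;
  --space-2: 1rem;
}

/* Tier 2: an atom consuming only Tier 1 tokens */
.button {
  background: var(--color-brand);
  font-family: var(--font-body);
  padding: var(--space-1) var(--space-2);
}
```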


  • Create a css grid using background-image
.grid {
  width: 100%;
  height: 100%;
  background-image: repeating-linear-gradient(
      transparent calc(1px + var(--grid-blur, 0px)),
      transparent var(--grid-size),
      var(--grid-color, black)
        calc(var(--grid-size) + 1px + var(--grid-blur, 0px))
    ),
    repeating-linear-gradient(
      90deg,
      transparent calc(1px + var(--grid-blur, 0px)),
      transparent var(--grid-size),
      var(--grid-color, black)
        calc(var(--grid-size) + 1px + var(--grid-blur, 0px))
    );
}

then transform it to make it look like a flat plane:

transform: perspective(10vh) rotateX(47deg) translateZ(41px);


  • Tinkering away on hacker.fm. Got the media controls implemented.
  • The audio html element can take multiple sources:
    <audio controls>
    <source src="myAudio.mp3" type="audio/mpeg">
    <source src="myAudio.ogg" type="audio/ogg">
    <p>Your browser doesn't support HTML5 audio. Here is
       a <a href="myAudio.mp3">link to the audio</a> instead.</p>
    </audio>


  • Learnt how to roll back git commits using git reset HEAD~ --hard
  • git reset HEAD~ --soft will roll back the commits but leaving the changes staged so you can re-commit the change with some tweaks if necessary.
  • git reset HEAD~{number} --hard can be used if you want to rollback several commits in one go


  • Been busy, but not sure if I’ve been productive lately. Lots of just-in-time learning, but I’ve not been going deep on anything. Reminder that going deep on a topic will open up new rabbit holes and unravel new sources of inspo.

  • Finally got round to finishing the particles lesson on three.js journey. Some things I fundamentally don’t understand:

    • setAttribute what are the attributes we’re setting? what does this look like? can it be anything we want?
    • Float32Array - a kinda magical primitive to me. it’s just an array of numbers right?
  • Learning a lot about blending 3D objects together. The main takeaway is that three.js doesn’t necessarily know what should be on top of what. This can be worked around with blending, and by tinkering with the alphaMap, alphaTest, transparent and depthWrite properties of your material.


  • Had a little bit of fun porting my three.js audio visualiser to React. Got it wired up to Astro.js so the visualiser lazy loads into the page. Really impressed this just works…


  • Learning all about the limitations of email template development today. MJML is an interesting technology, but it’s still annoying to not have the full capabilities of CSS to hand.

  • Ported my three.js audio visualiser logic to a React component and was pleasantly surprised to see the class kinda just work when passing HTML elements to it via React refs.


This is where the magic happens…

  createVisualizer() {
    this.audioContext = new AudioContext();

    if (!this.audioElem) {
      return;
    }

    this.audioElem.crossOrigin = "anonymous"; // Otherwise the browser moans

    const src = this.audioContext.createMediaElementSource(this.audioElem);
    const analyser = this.audioContext.createAnalyser();

    // wire up: audio element -> analyser -> speakers
    src.connect(analyser);
    analyser.connect(this.audioContext.destination);

    analyser.fftSize = 128; // Bump to 256 to get a smaller sample rate

    const vis = document.getElementById("vis");

    // Still dont understand this bit - frequencyBinCount is half of fftSize
    const bufferLength = analyser.frequencyBinCount;
    const dataArray = new Uint8Array(bufferLength);

    let barHeight: any;

    function renderFrame() {
      requestAnimationFrame(renderFrame);
      analyser.getByteFrequencyData(dataArray);

      for (let i = 0; i < bufferLength; i++) {
        barHeight = dataArray[i]; // This is our magic number that will fluctuate up and down for our data viz!

        if (barHeight < 50) {
          continue; // skip the quiet bins
        }

        vis!.style.height = `${barHeight + 100}px`;
        vis!.style.width = `${barHeight + 100}px`;
      }
    }

    renderFrame();
  }



  • Fun building a 3D scene with just Three.js primitives, e.g. with just built-in geometries like SphereGeometry and BoxGeometry. You can get pretty far.

  • Learnt about how to make high fidelity animations with theatre.js. First time I have seen a UI used in conjunction with code to build animations. It actually makes a lot of sense! It combines the best bits of dat.gui and GSAP to create an intuitive animation authoring experience.


Been playing with Astro some more and finally got my head around how to use npm modules client side in a .astro file. It’s not that obvious…

First thing I tried was something like this:

<!-- Test.astro -->
<canvas class="webgl"></canvas>
<script type="module"> 
 import * as THREE from 'three'

 console.log(THREE) //undefined :(


This returns Uncaught TypeError: Failed to resolve module specifier "three". Relative references must start with either "/", "./", or "../". in the console.

Astro doesn’t let you import npm modules in inline script tags within .astro files, unfortunately. However, we can import in an external .js/.ts file, then make use of Astro.resolve like so:

<!-- Test.astro -->
<canvas class="webgl"></canvas>

<script src={Astro.resolve('./myScript.js')} type="module"/>

Inside myScript.js we can import things as expected.

// myScript.js
import * as THREE from 'three';

console.log(THREE) // Three.js module!

Working demo here.


  • Some more shadow fun in Three.js, there a bunch of shadow map algorithms that can be used:

  • THREE.BasicShadowMap Very performant but lousy quality

  • THREE.PCFShadowMap Less performant but smoother edges

  • THREE.PCFSoftShadowMap Less performant but even softer edges

  • THREE.VSMShadowMap Less performant, more constraints, can have unexpected results

  • Rule of thumb, try to avoid using dynamic shadows as it uses a lot of CPU. A good alternative is to use ‘baked shadows’.

  • A cool technique with baked shadows is to add a shadow texture to a mesh and place the mesh directly underneath a 3D object. You can adjust the opacity of the mesh to create different lighting levels.


  • why are fixed footers such a b%tch? Lets talk through the options…

  • absolute positioning sucks, because it means the footer potentially floats over some content at the bottom of the page

  • flexbox kinda works-ish. It means wrapping your content in extra divs and justifying vertical content with space-between. But this feels clunky, and I’m getting a weird effect where the main content is centred on the page when I want it to stick to the top.

  • grid might be the best way forward. but it’s not supported everywhere such as in a react-pdf context.
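For reference, the flexbox and grid variants I’m weighing up look roughly like this (assuming a simple body > main + footer structure):

```css
/* flexbox: main grows, pushing the footer to the bottom */
body {
  min-height: 100vh;
  display: flex;
  flex-direction: column;
}
main {
  flex-grow: 1; /* avoids the centring weirdness of space-between */
}

/* grid alternative: the 1fr row soaks up the spare space */
body.grid-version {
  min-height: 100vh;
  display: grid;
  grid-template-rows: 1fr auto;
}
```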


  • Learning about lighting in three.js

  • There’s a lot of different light sources available and positioning them will take experimentation. Make use of the THREE light helpers which visualise where the light source is coming from combined with dat.GUI to quickly move the light sources around.

  • Light sources come at a computational cost and should be used sparingly

    Minimal cost:


    Moderate cost:


    High cost:

  • Consider Baking

    A good technique for lighting is called baking. The idea is that you bake the light into the texture. This can be done in a 3D software. Unfortunately, you won’t be able to move the lights, because there are none and you’ll probably need a lot of textures.


  • Starting to learn about use-gesture. Great little tutorial on https://www.youtube.com/watch?v=dBWhAGhw7wM

  • use-gesture tries to make it easy to go beyond basic gesture primitives such as onClick, extending to lots more gestures like onDrag and onPinch, so we can create much more native-like experiences.

  • It works well with React via some out-of-the-box hooks. The boilerplate code is quite simple:

function MyComponent(){

	const myDomRef = useRef()

	// exact config shape depends on the use-gesture version
	useGesture(
		{ onDrag: (event) => doSomething(event) },
		{ domTarget: myDomRef }
	)

	return <div ref={myDomRef}> hey </div>
}
  • the library does not do the work to move and transform dom elements, just provides the utilities to do so via the events exposed on the callbacks.
  • Within those callback it’s recommended to use an animation lib like Framer or React spring to create interesting touch feedback.


  • Progressively enhance your website using modern picture formats like WebP and AVIF using the <picture> element. As <picture> allows browsers to skip images they do not recognise, you can include images in your order of preference. The browser selects the first one it supports.
 <source srcset="img/photo.avif" type="image/avif">
 <source srcset="img/photo.webp" type="image/webp">
 <img src="img/photo.jpg" alt="Description" width="360" height="240">


reactive declarations with the $: syntax are pretty neat.

not only is it used to create computed values:

$: doubled = count * 2;

but code can be reactively run if the dependency has changed too:

$: {
	console.log(`the count is ${count}`);
	alert(`I SAID THE COUNT IS ${count}`);
}

big gotcha with svelte is that array methods like pop, shift, unshift and splice don’t trigger updates. Svelte needs a reassignment to work:


let numbers = [1, 2, 3, 4];

function addNumber() {
	numbers.push(numbers.length + 1);
} // DOESNT UPDATE `numbers`

function addNumber() {
	numbers = [...numbers, numbers.length + 1];
}

A simple rule of thumb: the name of the updated variable must appear on the left hand side of the assignment

  • props in svelte use the export keyword. little bit weird…

  • like Vue.js, it has its own syntax for html conditionals and loops. It’s not so adamant about providing a key when looping, but it is encouraged. The syntax is slightly different:

{#each things as thing (thing.id)}
	<Thing name={thing.name}/>
{/each}
  • suspense out the box!
{#await promise}
	<p>...waiting</p>
{:then number}
	<p>The number is {number}</p>
{:catch error}
	<p style="color: red">{error.message}</p>
{/await}

TBC - 6.Bindings

Three.js only supports typeface fonts; it’s possible to convert fonts with something like this - https://gero3.github.io/facetype.js/

  • The basic boilerplate for three.js fonts looks like this:
const fontLoader = new THREE.FontLoader()

fontLoader.load(
    '/fonts/helvetiker_regular.typeface.json', // example font path
    (font) =>
    {
        const textGeometry = new THREE.TextGeometry(
            'Hello Three.js',
            {
                font: font,
                size: 0.5,
                height: 0.2,
                curveSegments: 12,
                bevelEnabled: true,
                bevelThickness: 0.03,
                bevelSize: 0.02,
                bevelOffset: 0,
                bevelSegments: 5
            }
        )
        const textMaterial = new THREE.MeshBasicMaterial()
        const text = new THREE.Mesh(textGeometry, textMaterial)
        scene.add(text)
    }
)

to animate and add other materials, we do it all inside the callback function.


  • in three.js land, materials are the abstractions over creating shaders to produce textures for 3D objects
  • Reminder: a Three.Mesh is comprised of a geometry and a material. The material is supplied as the second argument to the Three.Mesh constructor.
const material = new THREE.MeshBasicMaterial();
const sphere = new THREE.Mesh(new THREE.SphereGeometry(0.5, 16, 16), material);
  • Textures are simply image files that need to be mapped (wrapped) around a 3D object
const textureLoader = new THREE.TextureLoader();
const doorColourTexture = textureLoader.load("/textures/door/color.jpg");
  • Then we can provide this to the THREE.MeshBasicMaterial constructor.
const material = new THREE.MeshBasicMaterial({
 map: doorColourTexture,
});
  • There are a lot more params that can be provided to the MeshBasicMaterial options object, like transparent and wireframe - see the docs for reference

  • Note, materials for THREE.PlaneGeometry only get rendered on one side. This can be worked around with material.side = THREE.DoubleSide;, but it means double the amount of triangles need to be calculated.

THREE.MeshBasicMaterial - lets you map your own textures

MeshNormalMaterial - displays a nice purple, blueish, greenish color

MeshMatcapMaterial picks colours relative to the orientation of the camera, creating the illusion of a light source (thus fewer resources required).

MeshDepthMaterial - displays a hue of colour dependent on the distance from the camera.

You can use this material for special effects where you need to know how far the pixel is from the camera.

If we want to use something like the THREE.MeshLambertMaterial we need a real light source

const pointLight = new THREE.PointLight(0xffffff, 0.5)
pointLight.position.x = 2
pointLight.position.y = 3
pointLight.position.z = 4

MeshToonMaterial- give a cool cel shaded look!

MeshStandardMaterial uses physically based rendering principles. it supports lights but with a more realistic algorithm and better parameters like roughness and “metalness”.

The aoMap property (literally “ambient occlusion map”) will add shadows where the texture is dark. Useful for bringing depth to textures.

To make this work we need to add another attribute to the geometry using setAttribute , which duplicates the existing uv attribute

sphere.geometry.setAttribute('uv2', new THREE.BufferAttribute(sphere.geometry.attributes.uv.array, 2))

Now we can control the intensity of the aoMap like so:

material.aoMap = doorAmbientOcclusionTexture
material.aoMapIntensity = 1
  • displacing textures is a cool way to bring depth to a 3D object based on the texture. To do so we provide a texture to the displacementMap property, and then we can mess around with the displacementScale
material.displacementMap = doorHeightTexture;

// crank up to exaggerate the texture
material.displacementScale = 0.1;

envMap, or environment maps, can create a nice mirror effect by mapping a surrounding texture onto your mesh.

const cubeTextureLoader = new THREE.CubeTextureLoader()

const environmentMapTexture = cubeTextureLoader.load([
  // six face textures, in px, nx, py, ny, pz, nz order
])

One of the best sources is HDRIHaven. To convert an HDRI to a cube map, you can use this online tool: https://matheowis.github.io/HDRI-to-CubeMap/


  • So it’s been a pretty interesting experience trying to publish my Obsidian notes without the use of something like https://obsidian.md/publish. The first problem is that Obsidian uses double bracket syntax [[ stuff ]] which is not part of the markdown syntax. Luckily this can be turned off, so backlinks use proper md link syntax.
  • Next, within Obsidian it’s important that the New Link format setting is set to Relative path to file. This produces urls which will work with <a href> tags.
  • The final problem is that the backlinks reference .md files, so urls will look something like ../fleeting-note/20210928.md. The .md makes the url invalid, so we need to chop it off somehow…
    • Netlify redirects to the rescue.
    • With Netlify redirects we can redirect our invalid url to the correct url. For example
    /fleeting-notes/20200910.md  /fleeting-notes/20200910
    • But we need to do this for every single file, so I wrote a pretty simple script to generate the _redirects file.
  • Also been playing around with Astro a lot and it’s been pretty great. VS Code gets a little bit confused with .astro files sometimes, but closing and reopening the file seems to fix it for now.
  • I love how it’s just html you’re working with but with the component architecture that we know and love.
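The redirect generation boils down to something like this (a hypothetical helper; the real script also walks the notes directory):

```javascript
// Build the body of a Netlify _redirects file: each line maps the
// ".md" url to the extension-less one.
function buildRedirects(notePaths) {
  return notePaths
    .map((path) => `${path} ${path.replace(/\.md$/, "")}`)
    .join("\n");
}

const redirects = buildRedirects([
  "/fleeting-notes/20200910.md",
  "/fleeting-notes/20200911.md",
]);
console.log(redirects);
```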


So, after many many billions of dollars invested, it’s fair to ask “what problems have been solved?” Then, we should weigh those up against the costs: explosion in ransomware, acceleration of money laundering by crime syndicates, climate change. Are we winning?


  • Final touches to the new learning site. Having lots of fun with Astro.js, but running into a few niggles around the Astro class and using it in the frontmatter. All in all, I’m excited to explore astro in a bit more depth.
  • Now time to focus on linking my notes up in Obsidian to build my second brain!


  • Porting all my journals to obsidian and will adopt the zettelkasten
  • Will also be migrating from 11ty to [[../index-notes/astro]].js. Always found 11ty a bit messy to use. The API for astro just seem a lot more intuitive and the bring-your-own-framework concept is perfect for progressive enhancement.


  • Loading textures into Three.js is super easy
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load("/textures/door/color.jpg");
  • the texture can then be used inside the MeshBasicMaterial class.
const material = new THREE.MeshBasicMaterial({ map: texture });
  • Note there are also three callback functions that can be hooked into:
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load(
  "/textures/door/color.jpg",
  () => {
    // loaded
  },
  () => {
    // progress
  },
  () => {
    // error
  }
);

A loading bar is often created by utilising the THREE.LoadingManager class. There are callbacks which can be hooked into:

const loadingManager = new THREE.LoadingManager();
const textureLoader = new THREE.TextureLoader(loadingManager);

loadingManager.onStart = () => {
  console.log("loading started");
};

loadingManager.onLoad = () => {
  console.log("loading finished");
};

loadingManager.onError = () => {
  console.log("loading error");
};
  • UV coordinates are the mechanism for specifying how textures get mapped onto a geometry. Premade geometries in Three.js have set UV coordinates, which can be accessed via the attributes property on any geometry class. They are represented as a Float32Array.

  • Textures can be remapped at runtime by tweaking properties on the texture object.

    • repeat.x/y - repeat a texture x number of times on the x or y axis
    • offset.x/y - offset a texture on either the x or y axis
    • wrapS - needs to be set to THREE.MirroredRepeatWrapping for mirrored repeats on the x axis
    • wrapT - needs to be set to THREE.MirroredRepeatWrapping for mirrored repeats on the y axis
    • rotation
    • center - changes the pivot point when rotating
  • Image filtering can change the appearance of textures to make them more or less sharp

    • minFilter has better performance but will stretch and blur your textures
    • magFilter will make your image look sharp and remove any blurriness
    • Both can be set to THREE.NearestFilter
  • texture resolutions should be a power of two, otherwise performance will suffer

  • jpg is smaller but lossy. png is lossless but heavy

  • places to get textures


  • dat.gui is a good companion for three.js. It provides an API to mutate objects, so it will force you to abstract key params for your three.js project into a params system. The easiest way is to simply have a params object which you can reference when using the three.js API.


  • There is some standard boilerplate code to ensure the canvas used for three.js is reactive to screen sizes
// get height and width from the window object
const sizes = {
  width: window.innerWidth,
  height: window.innerHeight,
};

window.addEventListener("resize", () => {
  sizes.width = window.innerWidth;
  sizes.height = window.innerHeight;

  // both the camera and renderer need to be notified and refreshed
  renderer.setSize(sizes.width, sizes.height);

  // importantly the aspect ratio of the camera also needs to be updated
  camera.aspect = sizes.width / sizes.height;
  camera.updateProjectionMatrix();

  // important for retina screens or any screen with a pixel ratio higher than 2.
  // No need to accommodate anything higher than 2.
  renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
});

  • finally getting around to playing around with interactive cameras in three.js, there’s a lot of maths involved.

once we have our mouse event positions normalised to the range -0.5 to 0.5, we need to remember that a negative y position makes the camera go “up”, therefore we need to invert the y axis

const cursor = {
  x: 0,
  y: 0,
};

window.addEventListener("mousemove", (e) => {
  cursor.x = e.clientX / sizes.width - 0.5;
  cursor.y = -(e.clientY / sizes.height - 0.5);
});

inside our tick function we can simply update the camera with the cursor values

const tick = () => {
  // Update camera
  camera.position.x = cursor.x * 10;
  camera.position.y = cursor.y * 10;

  // Render
  renderer.render(scene, camera);

  // Call tick again on the next frame
  window.requestAnimationFrame(tick);
};

to keep the object centred in view, make the camera track it inside the tick function:

camera.lookAt(mesh.position);

using trigonometry we can make our object spin however many rotations we want. remember that Math.PI will make an object spin 180 degrees.

  camera.position.x = Math.sin(cursor.x * Math.PI * 2) * 3;
  camera.position.z = Math.cos(cursor.x * Math.PI * 2) * 3;
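To sanity-check that trigonometry without a browser or three.js, the cursor-to-orbit mapping can be written as a pure function (the helper name is mine):

```javascript
// Hypothetical helper: maps a normalised cursor x (-0.5..0.5) to a camera
// position on a circle of the given radius around the scene centre.
function orbitPosition(cursorX, radius = 3) {
  const angle = cursorX * Math.PI * 2; // full cursor sweep = one full revolution
  return {
    x: Math.sin(angle) * radius,
    z: Math.cos(angle) * radius,
  };
}

// cursor at rest (0) leaves the camera on the z axis at `radius`
orbitPosition(0); // { x: 0, z: 3 }
```

Because the camera stays on a circle of fixed radius, combining this with lookAt keeps the object in view for the whole sweep.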

or just use the built in controls

import { OrbitControls } from "three/examples/jsm/controls/OrbitControls";

const control = new OrbitControls(camera, canvas);

There is also a smooth, flowing control feel available by enabling damping

control.enableDamping = true;

but also remember to call control.update() inside your tick function so the damping is applied on each frame



  • I never use ref forwarding in React, but it’s a useful technique for passing a ref through a component down to one of its children
  • By wrapping your component in a React.forwardRef function you can expose a ref.
    const FancyButton = React.forwardRef((props, ref) => (
      <button ref={ref} className="FancyButton">
        {props.children}
      </button>
    ));
    • We can then reference this ref outside of this component like so:
    // You can now get a ref directly to the DOM button:
    const ref = React.createRef();
    <FancyButton ref={ref}>Click me!</FancyButton>;
    • the ref can be passed around easily into any other component


  • Still chipping away at the three.js journey camera lesson! Lots of magic numbers to remember
    • aspect ratio can be figured out like so:
    const sizes = {
      width: 800,
      height: 600,
    };
    const aspectRatio = sizes.width / sizes.height;
    • When tracking the mouse, we need to convert the pixel coordinates to a normalised grid, so that mouse tracking behaves consistently across screen sizes.
    window.addEventListener("mousemove", (e) => {
      cursor.x = e.clientX / sizes.width - 0.5;
      cursor.y = e.clientY / sizes.height - 0.5;
    });
    • it doesn’t necessarily have to be 0.5


  • Trying to chip away at lesson 7 of Three.js journey.

    • the last two args for PerspectiveCamera indicate near and far.

      The third and fourth parameters called near and far, correspond to how close and how far the camera can see. Any object or part of the object closer to the camera than the near value or further away from the camera than the far value will not show up on the render.



  • learning about different camera types in three.js
    • The Camera class is not meant to be used directly, instead there are some common camera types that can be used instead.
    • ArrayCamera - renders multiple cameras, sort of like in split-screen video games.
    • StereoCamera - mimics the eyes to create 3D scenes for VR headsets
    • CubeCamera - can be used to render your surroundings
    • OrthographicCamera - renders your scene without perspective, so all elements have the same size regardless of distance from the camera
    • PerspectiveCamera - simulates a real-life camera with perspective


  • Learning about animations in three.js
  • like in mapbox.gl. it’s all done with window.requestAnimationFrame(). I like the way Bruno puts it. requestAnimationFrame is not for animations, it’s for running a function on each frame.
  • Frame rates vary from computer to computer; more powerful computers can run at higher FPS. Therefore it’s important to normalise animations by using some sort of timing utility.
  • three.js has new THREE.Clock() built in, with a .getElapsedTime() method that returns the elapsed time in seconds. With this normalised value we can guarantee objects animate consistently regardless of the computer they are on.
  • Lots of maths tricks will be required to get cool animation tricks to happen. Remember that when rotating an object, Math.PI will rotate it 180 degrees. Math.sin and Math.cos can create a wavy animation returning back to 0
  • GSAP makes animating three.js objects a breeze!
// Move object back and forth
gsap.to(mesh.position, {
  x: 2,
  duration: 1,
  delay: 1,
});
gsap.to(mesh.position, {
  x: 0,
  duration: 1,
  delay: 2,
});
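The clock-based normalisation idea can be demonstrated without three.js at all: derive the position purely from elapsed seconds, so the result is identical at any frame rate (the function name is illustrative):

```javascript
// Position depends only on elapsed seconds, so it is the same whether the
// tick runs at 30fps or 144fps.
function positionAt(elapsedSeconds) {
  return {
    x: Math.cos(elapsedSeconds), // waves between -1 and 1
    y: Math.sin(elapsedSeconds),
  };
}

// After exactly one full period (2 * PI seconds) the object is back where it started.
const start = positionAt(0);
const afterPeriod = positionAt(Math.PI * 2);
```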


  • Three.js journey fun, positioning.

  • mesh and camera objects inherit from the base Object3D class, which has the property position. The position has x, y and z properties to control the position on a canvas.

    • x is right - left
    • y is up - down
    • z is forward- back
    • mutating these values is synchronous, so execution order is important. It’s written in a very imperative way.

Useful methods🔗

  • mesh.position.length() returns the distance of the object from the center of the scene.

  • mesh.position.distanceTo(camera.position) returns the distance from the object to another Vector3, e.g. the camera’s position

  • mesh.position.normalize() scales the vector so its length becomes 1.

  • mesh.position.set() is a quick way to move the position of an object. It takes 3 args: x, y and z.

  • View a visual helper of the axes with AxesHelper(). blue = z, green = y, red = x. The length of each line represents 1 unit.

const axesHelper = new THREE.AxesHelper();
  • decide what your units represent and stick to it, e.g. 1 === 1km. It can change from project to project, but just be consistent.
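What those vector helpers compute can be sketched in plain JS (this is just the maths, not three.js itself):

```javascript
// length: euclidean distance of a point {x, y, z} from the origin
const length = (v) => Math.sqrt(v.x * v.x + v.y * v.y + v.z * v.z);

// distanceTo: length of the difference between two vectors
const distanceTo = (a, b) =>
  length({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });

// normalize: scale the vector so its length becomes exactly 1
const normalize = (v) => {
  const l = length(v);
  return { x: v.x / l, y: v.y / l, z: v.z / l };
};

length({ x: 3, y: 4, z: 0 }); // → 5
```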

  • Scaling an object works in the same way:

    • mesh.scale which also has x, y and z properties
    • it has very similar properties to position
  • Rotations are slightly harder

    • you can do this with either rotation or quaternion
    • the rotation object is an Euler class. This requires you to think about which axis the object is rotating on.
    • To rotate an object you use pi, either 3.14159 or Math.PI; a whole rotation is pi x 2
    • Warning, you can get into “gimbal lock” when doing too many rotations. This is where changing one of the rotations does nothing.
      • to work around it, rotate the x, y and z properties in a different order. Remember how imperative the execution order is. rotation.reorder lets you change the rotation order by applying a string in the order you want e.g rotation.reorder('YXZ')
    • quaternion is a mathematical representation of rotation that gets around these gimbal locks. It will “just work”. This is a black box for now.
  • lookAt() is a really useful method to look directly at another Vector3 object, e.g. mesh.position. You can do something like camera.lookAt(mesh.position) so the camera tracks an object.

  • 3D objects get very complicated, so if you want to create lots of 3d objects and group them, use new THREE.Group(). Objects can be added to the group so that they can be rotated and scaled all at the same time.


  • Watched breaking bad software developer habits
    • don’t “own” your parts of the codebase, you should be able to work on any part of the stack
    • don’t be afraid to break things. If you are, why? You probably need more resilient tests in place
    • don’t focus on the happy path, think of every way your app can break
    • don’t think it’s someone else’s problem. You have the power to change things


  • Started the first real coding exercise of threejs journey. Starting slow with the basics.

  • Basics of a scene: we need a scene, geometry and material. We combine the geometry and material together to make a mesh, which can be added to a scene.

  • Next we need a camera that also needs to be added to the scene

    • the camera needs to be configured to have an FOV and aspect ratio.
    • the camera has a position property where you can configure the x,y,z position of the camera
  • Finally the scene and camera need to be placed onto a renderer. There are many types of renderers but the main one is THREE.WebGLRenderer


  • Learning more about error handling in GQL.
    • group your errors into categories. system errors and expected errors
    • expected errors can be handled ahead of time giving better UX
    • build errors into the schema and use union types to handle different conditions


  • Full stack apollo is starting to click. Had a bit of a revelation regarding the apollo cache. It’s just a very smart object store which saves network requests. A lot of the time it’s easier to work with the apollo cache than to reason about the browser cache.


  • Still making my way through “Intro to Apollo Client 3.0”

    • Use the fields property in the Apollo cache typePolicies to create computed fields which are dynamically created per API request, e.g. you could combine a first and last name together.
    • When fetching multiple “things” via graphql, make sure everything is normalised in the cache. To do this, just ensure that there is a unique identifier via an id, or that you have specified one via the keyFields type policy. This is important because, if you wanted to query just a single one of the “things”, Apollo would still make a network request, even though it already has the information in the cache. To let Apollo know that the data may already be in the cache we have to specify read rules for the query in the InMemoryCache. Make use of the toReference callback function, which requires you to provide the __typename and the unique identifier configured for that resolver. see 1h40.
    • delete specific parts of the cache with the evict() method.
  • The nullish coalescing operator ?? is handy. It’s similar to the logical OR operator ||, except that it only falls back when the left side is null or undefined, so 0 and '' (empty string) are kept rather than treated as falsy.
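A quick demonstration of the difference:

```javascript
// ?? only falls back when the left side is null or undefined;
// || also falls back on any falsy value (0, "", false, NaN).
const count = 0;
const name = "";

const a = count ?? 10; // 0   - kept, because 0 is not nullish
const b = count || 10; // 10  - replaced, because 0 is falsy

const c = name ?? "anon"; // ""     - kept
const d = name || "anon"; // "anon" - replaced

const e = undefined ?? "fallback"; // "fallback" - both operators agree here
```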


  • Still really confused about how to update the apollo cache in a “optimistic UX” type way. Falling back on using apollo refetch query too much!


  • Lots of work in GQL land today. Finding some limitations when trying to write expressive rules for GQL to return a very specific data structure, but not seeing a way how at the moment. I want to return an array with some very specific objects, no less, no more. Looking for a clean way to do this but hacking around the edges. Need to dive deeper into GQL syntax


  • back on the GQL / apollo training. Having a second look at setting up the backend from scratch, referencing my old learning journal, which was kinda meta.

  • This guide is still the best and most comprehensive i’ve come across so far which is specific to node, ts, apollo and prisma.

  • Funneling the prisma types into codegen means you get all of Prisma’s auto-generated types along with the types of your graphql schema when writing your resolvers. It’s really elegant.

  • Learning how to use .graphql directly in a node project. stack overflow to the rescue

    • Looks like the way we do it at work is with @graphql-tools/load, which is also what stack-overflow recommends.
    • this works nicely for my basic use case
    import { loadSchemaSync } from "@graphql-tools/load";
    import { GraphQLFileLoader } from "@graphql-tools/graphql-file-loader";

    export const schema = loadSchemaSync("./src/schema/*.graphql", {
      loaders: [new GraphQLFileLoader()],
    });
  • so many different flavours of apollo-server!

    • When starting out, just stick with apollo-server. If you need to serve other endpoints on the same node server, swap to apollo-server-express. There are also many other supported node backends, including serverless!

  • Moving on to the front end of the tutorial

  • I am only just realising now that you get GQL auto completion within VS Code if you have the Apollo GraphQL extension installed



  • Finished the full stack apollo tutorial

    • Learnt how it’s possible to use the apollo cache as a state management tool by making use of Reactive Variables via the makeVar function. You can define reactive variables client side like so
    // Initializes to true if localStorage includes a 'token' key,
    // false otherwise
    export const isLoggedInVar = makeVar<boolean>(!!localStorage.getItem('token'));
    // Initializes to an empty array
    export const cartItemsVar = makeVar<string[]>([]);

    These can be accessed in mutation hooks via any of the callback functions, simply by calling the name of the reactive var as a function e.g isLoggedInVar()
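Ignoring Apollo’s reactivity machinery, the read/write calling convention of makeVar can be sketched as a closure (this is my toy version, not Apollo’s code):

```javascript
// Minimal sketch of the makeVar calling convention (no reactivity):
// calling with no args reads the value, calling with an arg writes it.
function miniMakeVar(initial) {
  let value = initial;
  return (...args) => {
    if (args.length > 0) value = args[0];
    return value;
  };
}

const isLoggedInVar = miniMakeVar(false);
isLoggedInVar();     // read  → false
isLoggedInVar(true); // write
isLoggedInVar();     // read  → true
```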

  • Need to play around with cache.modify in more detail. This can let us modify the apollo cache in any way.


  • Working my way through Apollo’s fullstack tutorial to go over apollo/GQL fundamentals.

If a declared field’s type is in [Square Brackets], it’s an array of the specified type. If an array has an exclamation point after it, the array cannot be null, but it can be empty.

the resolver for a parent field always executes before the resolvers for that field’s children.

  • it’s best to keep resolvers very small. Offload complex logic to separate xAPI.ts/js or xManagers.ts/js files. This way, if there are any changes to your backend, resolver logic does not need to be touched.

  • Top level resolvers will still execute even if there are some custom resolvers which we have not written yet. This is because Apollo defines a default resolver for any field you don’t define a custom resolver for.



  • Finally figured out what the different colours mean when you turn on the “highlight updates when components render” option in react dev tools.

The React Developer Tools highlights components that are re-rendering at a given point in time. Depending on the frequency of updates, a different color is used. Blue shows infrequent updates, ranging to green, yellow, and red for components that update frequently. source

  • Also something clicked in my head with graphql today. It suddenly makes sense.
    • schema , the what
    • resolver, the how


  • Fun with the apollo cache today. My man Ben Awad with the knowledge again. Basically three ways to utilise the apollo cache
    • adding a new record automatically updates the apollo cache
    • adding a refetch policy on mutations forces the cache to update, however this means there are two network requests. Not the end of the world…
    • you can manually amend the apollo cache on mutation with the update callback. Within this callback you have access to the cache object, which has some additional methods you can utilise like modify and writeQuery.


  • Progress with react-dnd. Got basic drag and drop functionality in a mini Kanban app. The thing that is confusing me right now is whether to use a hook or context for state management.


  • react-dnd has a slightly steep learning curve, but the actual usage of the API is quite nice.
  1. first you need to wrap your app in the DndProvider and provide it with the HTML5Backend, because we’re working with web pages.
import React, { Component } from "react";
import { render } from "react-dom";
import { DndProvider } from "react-dnd";
import { HTML5Backend } from "react-dnd-html5-backend";

import "./style.css";

function App() {
  return (
    <DndProvider backend={HTML5Backend}>
      <div className="app-container">....</div>
    </DndProvider>
  );
}

render(<App />, document.getElementById("root"));
  2. Now we have access to the react-dnd hooks like useDrag and useDrop
  • with useDrag, we deconstruct the react-dnd props in the first item of the array, in this example it’s isDragging
      const [{ isDragging }, drag] = useDrag(() => ({
        type: 'CARD',
        item: { test: 'test', type: 'CARD' },
        collect: (monitor) => ({
          isDragging: !!monitor.isDragging(),
        }),
      }));
  • the first item in the array can be any arbitrary prop which is “collected” by the collect callback function.


  • Trying to get my head around react-dnd. Making a start on this youtube series

where dnd sits between the DOM, vdom and react

Dnd Concepts🔗

  • Backends

    • confusingly, not something used in the backend e.g node.js. Backend here refers to events/interactions that can be tapped into between the DOM and react-dnd layers. This can include:
      • HTML events
      • touch events
      • testing Dnd interactions
      • Custom backends can also be created!
  • Item Types

    • used as a hint for dnd so there’s information about what is being dragged, or being dragged onto
  • Monitors

    • Communicates events from the DOM to React components. This is tapped into on the React side via a context.

Interactions with React🔗

  • Collectors

    • these are functions which turn Dnd events coming from monitors into react props. This is referred to as “collecting props”
  • Drag Sources

    • draggable components are known as drag sources
    • drag sources contain information passed on from drop targets.
  • Drop targets

    • components which are areas for things to be dropped on are drop targets
    • they can execute functions from Dnd events


  1. there are DOM events
  2. these get communicated to Dnd via monitors
  3. monitors are converted to state which is available to react via a context
  4. we use collecting functions to convert the state in the context into props - known as “collecting props”
  5. we use the props as we normally would in any react component


Props and composition give you all the flexibility you need to customize a component’s look and behavior in an explicit and safe way. Remember that components may accept arbitrary props, including primitive values, React elements, or functions.

If you want to reuse non-UI functionality between components, we suggest extracting it into a separate JavaScript module. The components may import it and use that function, object, or a class, without extending it.
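A minimal sketch of that suggestion, with a hypothetical formatPrice helper extracted into a plain module:

```javascript
// Shared non-UI logic lives in a plain module (formatPrice is a made-up
// example) that any component can import, instead of being inherited
// from a base component.
function formatPrice(cents, currency = "£") {
  return `${currency}${(cents / 100).toFixed(2)}`;
}

// Two unrelated "components" can both call it without any shared ancestor:
const cartLabel = `Total: ${formatPrice(1999)}`;
const productLabel = formatPrice(250);
```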


  • This thesis on gridless design is excellent. It’s not about css grid but the design grid. Designing around specific viewports is probably a bad idea in the world of web. Web design is fundamentally fluid and should bend at will to all viewports.

    no designer can anticipate all possible layouts. Nor should an engineer attempt to code layout exactly as specified by design


  • re-reading Good Bye Clean Code. Abstraction comes at a cost.

    My code traded the ability to change requirements for reduced duplication, and it was not a good trade

  • the article also touches upon empathy for the fellow developers you work with. How would you like it if someone smashed down your lego empire to rebuild it their way?

    I didn’t talk to the person who wrote it. I rewrote the code and checked it in without their input. Even if it was an improvement (which I don’t believe anymore), this is a terrible way to go about it. A healthy engineering team is constantly building trust. Rewriting your teammate’s code without a discussion is a huge blow to your ability to effectively collaborate on a codebase together.


  • Dan Abramov’s overreacted newsletter is just full of insightful opinions.

on let vs const, DA concludes: just use whatever has been enforced in the codebase; it probably does not matter which you use. But it is at least worth understanding some of the subtle differences between them.

tldr: use a linter, but…

Finally, remember that linters exist to serve you. If a linter rule annoys you and your team, delete it. It may not be worth it. Learn from your own mistakes.


  • I thought The Devil’s Albatross is such an interesting idea for making responsive designs without media queries. It makes use of flexbox’s grow property to force elements to wrap when they can’t fit in the viewport anymore.


  • Reading Dan Abramov’s newsletter about JavaScript. I like this about null vs undefined

    null and undefined: These are two special values. They’re special because there’s a lot of things you can’t do with them — they often cause errors. Usually, null represents that some value is missing intentionally, and undefined represents that a value is missing unintentionally. However, when to use either is left to the programmer. They exist because sometimes it’s better for an operation to fail than to proceed with a missing value.

Closure: Normally, when you exit a function, all its variables “disappear”. This is because nothing needs them anymore. But what if you declare a function inside a function? Then the inner function could still be called later, and read the variables of the outer function. In practice, this is very useful! But for this to work, the outer function’s variables need to “stick around” somewhere. So in this case, JavaScript takes care of “keeping the variables alive” instead of “forgetting” them as it would usually do. This is called a “closure”. While closures are often considered a misunderstood JavaScript aspect, you probably use them many times a day without realizing it!
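The classic demonstration of a closure keeping an outer variable alive:

```javascript
// The outer function's variable "sticks around" because the inner
// function still needs it - that's the closure.
function makeCounter() {
  let count = 0; // would normally disappear when makeCounter returns...
  return () => {
    count += 1; // ...but the returned function keeps it alive
    return count;
  };
}

const next = makeCounter();
next(); // 1
next(); // 2

// Each call to makeCounter creates an independent closure:
const other = makeCounter();
other(); // 1
```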


  • return one record using prisma (model name here is illustrative):
const [record] = await prisma.someModel.findMany({
  where: { record: some.id }, // optional
  take: 1,
  orderBy: { createdAt: "desc" }, // update depending on if you need the first or last item in the db
});
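The same query expressed as plain-JS array operations, to show what orderBy plus take are asking the db for (the data is made up):

```javascript
// Sort by createdAt descending, then take the first row.
const rows = [
  { id: 1, createdAt: "2021-01-01" },
  { id: 2, createdAt: "2021-03-01" },
  { id: 3, createdAt: "2021-02-01" },
];

const [latest] = [...rows]
  .sort((a, b) => b.createdAt.localeCompare(a.createdAt)) // orderBy "desc"
  .slice(0, 1); // take: 1

latest.id; // → 2, the newest record
```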


  • Working my way through newsletters from Dan Abramov:

Don’t treat “receiving props” as a special event. Avoid “syncing” props and state. In most cases, every value should either be fully controlled (through props), or fully uncontrolled (in local state). Avoid derived state when you can. And always be ready to render!

I like these principles:

  1. Don’t stop the data flow. Props and state can change, and components should handle those changes whenever they happen.
  2. Always be ready to render. A component shouldn’t break because it’s rendered more or less often.
  3. No component is a singleton. Even if a component is rendered just once, your design will improve if rendering twice doesn’t break it.
  4. Keep the local state isolated. Think about which state is local to a particular UI representation — and don’t hoist that state higher than necessary.


  • Knowing when to use useMemo and useCallback in React is still confusing… it feels like they get spammed as a safety net. How do you definitively know when they’re needed, or if there is a “better” way?


If your mental model is “dependencies let me specify when I want to re-trigger the effect”, this example might give you an existential crisis.

  • Yes this is exactly how I’m feeling right now

  • Sometimes it is important that your useEffect runs only once, so the empty deps array is still a valid approach (I think!)

  • The techniques to not lie about deps are very interesting.

  • useReducer is usually a good idea to keep logic and state clean inside a useEffect; it decouples updates from actions. useReducer is the cheat mode of hooks. Avoid putting your reducer inside a component, but do so if you need to read props from it.

  • If you only use some functions inside an effect, move them directly into that effect.
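The useReducer point above is easier to see when you remember a reducer is just a pure (state, action) => state function, testable on its own (the example is mine, not from the article):

```javascript
// A reducer is a pure function: given the current state and an action,
// it returns the next state. Update logic can be tested without any
// component or effect.
function counterReducer(state, action) {
  switch (action.type) {
    case "increment":
      return { count: state.count + action.step };
    case "reset":
      return { count: 0 };
    default:
      return state;
  }
}

let state = { count: 0 };
state = counterReducer(state, { type: "increment", step: 5 }); // { count: 5 }
state = counterReducer(state, { type: "reset" });              // { count: 0 }
```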


  • Started reading 7 habits of highly effective people:

    • Be Proactive
    • Begin with the End in Mind
    • Put First Things First
    • Think Win-Win
    • Seek First to Understand, Then to be Understood
    • Synergize
    • Sharpen the Saw



  • Learning lots from

    • prisma 2 schema auto completion rocks! But it auto completes with PascalCase, watch out for that!

    • placing the prisma instance into the ApolloServer context is a good idea! It means that when we create our mutations and resolvers we have access to prisma via the context, e.g context.prisma.

    • The real magic is when we combine the prisma autogenerated types with GraphQL Code Generator

      • We can tell the codegen to also use a custom interface which defines the context.
      • overwrite: true
        schema: "http://localhost:4000"
        generates:
          src/generated/graphql.ts:
            plugins:
              - "typescript"
              - "typescript-resolvers"
            config:
              contextType: ../context#Context
              useIndexSignature: true
      • When we then use the Resolvers types in our code like so, we have full prisma auto completion on the context argument.
      •  import { Resolvers } from "./generated/graphql";
           export const resolvers: Resolvers = {
             Query: {
               AllPosts: async (_, args, context) => {},
             },
             Mutation: {
               AddPost: async (_, args, context) => {},
               LikePost: async (_, args, context) => {},
             },
           };


  • when working with apollo, you don’t need to be ‘smart’ about reusing state which you already have and trying to pass it around.

    1. it causes things to not be reactive as you might expect
    2. too much local state is kinda hard to reason about after a while
  • Apollo has a cache which it taps into, which means a fetch may not necessarily go over the network. So from a dev standpoint we can be quite liberal about continuously running refetches on the client.


  • The more I use GUI grid the more I seem to appreciate it. Yes, I prefer writing CSS the “old fashioned way”, but as an abstraction over it, it works pretty darn well!


  • Started playing with Astro for the first time. Super excited about it. It’s something I’ve been thinking about a lot. The islands architecture is something I need to read into a bit more, but if it’s what I think it is, it’s what I’ve been dreaming about and struggling with re: webpack federation and micro-frontends. Just from a quick play around, I love how it puts HTML first and JavaScript last. It brings back the principle of least power. With SPA libs like vue-cli and CRA, it’s been difficult not to bring everything and the kitchen sink when making a simple app. I think Astro is going to great lengths to let us have our cake and eat it!


  • I still find db modelling confusing, but prisma is helping a lot. This is the syntax I need to remember:

model Link {
  id          Int      @id @default(autoincrement())
  createdAt   DateTime @default(now())
  description String
  url         String
  postedBy    User?    @relation(fields: [postedById], references: [id])
  postedById  Int?
}

model User {
  id       Int    @id @default(autoincrement())
  name     String
  email    String @unique
  password String
  links    Link[]
}

Link has a relation to User. Link uses the postedById field to reference an item in the User table via its id field.

These are touch points when adding a new table and graphql api endpoint

  • update the prisma.schema
  • run npx prisma migrate dev to update the prisma db
  • run npx prisma generate to update the prisma client libs
  • update the schema.graphql file to include any new mutation definitions.
    • Add the signature of any new types.
    • Update any existing types which may rely on the new type
  • implement any new mutation resolvers
  • Implement any new type resolvers
  • Update any existing type resolvers which depend on the new type resolver
  • remember to export any new mutations and resolvers and include them in the ApolloServer instance


  • fragments let us break down GQL queries into smaller queries, like spreading an object in javascript (the fields here are illustrative):
fragment addressDetails on User {
  name
  address
}

{
  allUsers {
    ...addressDetails
  }
}
it’s possible to parameterise and add default args to queries:

type Query {
  allUsers(olderThan: Int = -1): [User!]!
}

use aliases when making multiple queries with different args

{
  first: User(id: "1") {
    name
  }
  second: User(id: "2") {
    name
  }
}

Union types work exactly like in TS:

type Adult {
  name: String!
  work: String!
}

type Child {
  name: String!
  school: String!
}

union Person = Adult | Child

Then we can use conditional fragments when the members of the union type return different fields:

{
  allPersons {
    name # works for `Adult` and `Child`
    ... on Child {
      school
    }
    ... on Adult {
      work
    }
  }
}
Graphql security is a can of worms…


  • Final touches on the M4M website. I’ve learnt so much from a simple single page site.
    • styled components
    • css masking effect
    • css backgrounds
    • GSAP
    • RoughNotation
    • MapboxGL
    • turf.js!


  • some wins with Prisma 2 today. The API is finally starting to make sense. Intuitively reached for a count and it was there! Also the schema.prisma file makes a lot of sense when you have lots of pre-written tables already. You can see how different relationships are created.



  • Spent a long time working on the m4m landing page. React and styled-components really shining through as a mental model for abstracting different parts of the UI into intuitive components. For example, I had 3 sections which were handling the backgrounds. Instead, I was able to abstract all background markup into a single wrapper component, and all section content would then be wrapped by this. So much cleaner! Doing this with vanilla html I could imagine the markup getting very unwieldy.



  • Colleague shared this article with me about testing.
    • Testing is so nuanced because every application is so different. We have to find a strategy that works for the application. Therefore it’s likely we’ll never find a one-size-fits-all testing strategy.

      when anyone starts talking about various testing categories, dig deeper on what they mean by their words, as they probably don’t use them the same way as the last person you read did.

    • So are we all using testing terms like unit, integration and e2e in different contexts?


  • taking flexbox to the extreme with the m4m designs. flexbox is my hammer right now and I’m seeing all designs in flex boxes. Is this good? Or is this bad? 🤷🏾


  • refactored m4m UI designs away from absolute positioning. Thinking about css in relative units using something like flex makes layouts so much easier to reason with. The hardest thing is thinking about responsive design in mockups which don’t factor it in. I want to always incorporate responsive design into the web work that I do. Interpolating responsive design into my work is tricky, but flex honestly makes it a lot easier!


  • Used Jest’s test.each for the first time. The syntax is kinda confusing but I grokked it in the end.
    test.each([
      [1, 1, 2],
      [1, 2, 3],
      [2, 1, 3],
    ])(".add(%i, %i)", (a, b, expected) => {
      expect(a + b).toBe(expected);
    });
    The array of arrays holds the arguments and expected result for each test case. The last item in each inner array is the expected value; any number of values before it represent the arguments or conditions of your test.


  • Carried on reading some more of A Complete Guide to useEffect (man it’s a long blog!)
    • Thinking about react in effects helps you understand why we might get into unintended re-renders.
    • Every re-render is a function execution, which folds in the values known at that moment. E.g. when you run a setTimeout with a variable, then change the variable before the setTimeout has executed, the setTimeout ‘remembers’ the old value!
    • useState lets you persist state between renders. Remember that updating a value with useState causes a re-render!
    • useRef lets you mutate values without causing a re-render
    • deps in a useEffect is the way to let React know the exact dependencies the effect needs to “watch” for changes. But don’t ever lie about deps as a workaround to prevent a re-run. This can cause unintended consequences.
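The “folds in values” idea is just JavaScript closures. A plain-JS sketch of the same stale-value behaviour, no React involved:

```javascript
// A closure captures the value it saw at creation time, much like each
// React render captures its own props and state.
function makeGreeting(name) {
  return () => `Hello, ${name}`;
}

let user = "Ada";
const greet = makeGreeting(user); // captures "Ada"
user = "Grace";                   // reassigning later doesn't affect the closure
console.log(greet());             // still "Hello, Ada": the old value is "remembered"
```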


  • Refactored a bunch of m4m code to use styled components. I still have mixed feelings around CSS-in-JS
    • conditional logic inside styled components feels overly complex
    • I don’t like this syntax: ${props => { props.thing // do stuff with props.thing }}
    • moving styled components out of the same .tsx files helps clean up big files, but it feels like it removes the benefit of single-file .tsx components.


  • Refactored m4m project to use styled components rather than css modules. Struggling to find a good pattern for SVGs and animations. Tinkering around with this pattern, but not really understanding it…


  • Finally used git rebase in a real project. Have a better understanding of the use cases for git rebase vs git merge
    • git merge takes everything from another branch and sticks it in front of all the commits in your branch (via a merge commit). This can produce a “messy” git timeline
    • git rebase “re-writes” history by replaying your commits on top of the other branch, slotting them into the correct place chronologically. This can often help with conflicts, but not always.


  • Read through the styled-components docs. Using a lot of CSS-in-JS at work, so finally taking the time to understand how this library works…

    • Utilising props within styled components requires string interpolation
      const Button = styled.button`
        /* Adapt the colors based on primary prop */
        background: ${props => props.primary ? "palevioletred" : "white"};
        color: ${props => props.primary ? "white" : "palevioletred"};
        font-size: 1em;
        margin: 1em;
        padding: 0.25em 1em;
        border: 2px solid palevioletred;
        border-radius: 3px;
      `;

      // usage
      <Button primary>Primary</Button>
    • styled-components opens up polymorphic CSS! Extend an existing styled component with styled()
    const TomatoButton = styled(Button)`
      color: tomato;
      border-color: tomato;
    `;

    // Create the keyframes
    const rotate = keyframes`
      from {
        transform: rotate(0deg);
      }
      to {
        transform: rotate(360deg);
      }
    `;

    // Here we create a component that will rotate everything we pass in over two seconds
    const Rotate = styled.div`
      display: inline-block;
      animation: ${rotate} 2s linear infinite;
      padding: 2rem 1rem;
      font-size: 1.2rem;
    `;

    render(<Rotate>&lt; 💅🏾 &gt;</Rotate>);

    // styled components also support nested selectors
    const Thing = styled.div`
      color: blue;
      .something {
        border: 1px solid; /* an element with class "something" inside <Thing> */
        display: block;
      }
    `;

    render(
      <Thing>
        <label htmlFor="foo-button" className="something">
          Mystery button
        </label>
        <button id="foo-button">What do I do?</button>
      </Thing>
    );


  • Use git cherry-pick <git-hash> to pick a specific commit and apply it to your branch. When cherry-picking multiple commits it’s really important to do it in the correct order: apply the earliest commits first so they land in chronological order!


  • Had a small eureka moment with React today. Declaring a plain ol’ variable, e.g. let or const, in a React component won’t persist between re-renders. So if you instantiate a class and bind it to a plain variable, the instance will be lost when the component re-renders. To make sure the instance persists between re-renders we should use something like useState or useRef. What’s useful about useRef is that mutating it does not trigger a re-render. This is perfect for binding to something like a mapboxGL instance where the object is constantly changing.


  • Sounds obvious when writing this, but it’s easy to forget that the things I find obvious and easy are not necessarily the same for others, regardless of their seniority.

assume zero knowledge but infinite intelligence


  • Massively geeked out with more React + Mapbox. Throwing requestAnimationFrame into the mix to animate the camera and create pulsing effects is surprisingly tricky, especially with both TS and React involved. React needs a handle on the requestAnimationFrame instance, otherwise it has no idea how to re-render the screen without a full refresh. This blog was useful for my understanding https://css-tricks.com/using-requestanimationframe-with-react-hooks/


  • More React and MUI fun. Teasing out the art of the possible with MUI React components. I’ve been able to bend it to my will so far, and it only feels a little hacky. I still prefer coding with “low-level” CSS for the control, but I totally get why we would give up that control to move quicker. The trade-off is not so clear when a MUI component does not do 100% of what you want. Should you try and hack around MUI, or just eject out entirely?


  • Learnt about props.children in React. This is the equivalent of <slots/> in Vue.js. Anything within a React component’s tags can be accessed via props.children.


  • Using Mapbox with TS + React probably makes things way harder than they need to be! This is because map state and React state need to be in sync, which means the map and markers need to go through useState, and setup needs to happen inside useEffect. It all results in Mapbox not behaving as you might expect.


  • Signed up to GCP to enable the routes API so I could extract a walking path from Shoreditch to Kathmandu. Converted the routes API output payload to GeoJSON using @mapbox/polyline . This then lets us plug it into the Mapbox SDK easily.
  • Learning more about Mapbox Studio. Adding in data via Studio vs at runtime is interesting. Doing it via Studio probably has some performance benefits.


  • First full month at NB. Kinda enjoying working with React MUI. I like how easy it is to eject out of the standard styles with styled components.

    MUI is a toolkit, not a straitjacket.


  • Getting really comfortable writing React, converting massive JSX into smaller components and abstracting logic into React hooks now. Next up, getting comfortable with the backend stack, Prisma 2 and GraphQL.


  • Thinking about re-renders in React is kind of a mind-f*ck. I’d like to revisit Vue.js again to see if prop changes cause unnecessary re-renders like in React.


  • Really feeling the superpowers of TS + React whilst breaking down a 3000-line component into smaller sub-components.

    Resolve the red squiggles and save the day.


  • After learning the basics of yarn workspaces, I’ve gone down a rabbit-hole of learning about microfrontends. This is an advanced frontend architecture pattern that lets you stitch together different parts of your app using different frameworks. On the surface this sounds like an awful idea; the runtime overhead must be pretty bad if you’re using React, Svelte and Vue all in the same app. However, there’s a business case when a very large app is trying to move away from a specific framework, or if you want different teams to own different sections of an app, e.g. checkout and product navigation in a large e-commerce site. For my own very basic use case, I want to make lots of different utilities using any FE framework I want, but also stitch it all together in a single SPA, then host it on netlify.

  • I think this is possible when you combine various technologies like yarn workspaces, webpack 5 module federation and microfrontend frameworks like [single SPA](https://single-spa.js.org/docs/getting-started-overview).

  • this guy has got it working!

  • Learning webpack basics, something I’ve been meaning to do! Currently working through this tutorial

    • Fun fact about webpack: it’s actually zero config out of the box. As long as you have webpack-cli installed via npm, all your source files just need to be in a ./src directory; running npx webpack will bundle everything and output to a ./dist directory.
    • Babel is used alongside webpack to transpile JavaScript. The difference between compiling and transpiling?
      • compiling outputs a different language
      • transpiling outputs the same language, but targeting newer or older versions of it
    • A very basic webpack.config.js file:
      • target all .js files
      • ignore anything in node_modules
      • also use babel to transpile, babel-loader will look at your .babelrc for how you want to transpile your js.
    module.exports = {
      module: {
        rules: [
          {
            test: /\.js$/,
            exclude: /node_modules/,
            use: {
              // without additional settings this will reference .babelrc
              loader: 'babel-loader'
            }
          }
        ]
      }
    };
    • More webpack.config.js settings: mode: 'development' bundles the JS in development mode, which allows for source maps for easier debugging.
    • devtool: 'source-map' lets you see the original file which webpack bundled from and even set breakpoints.
    • entry: './src/index.js' is a way to provide a specific entry point; it defaults to searching in ./src
    • Override the output location like so. Note path needs to be imported with const path = require('path'). output: { filename: 'bundle.js', path: path.resolve(__dirname, 'public') }
    • devServer: {contentBase: './dist'} used along with webpack-dev-server means you can run webpack serve to have a dev server with HMR support quite easily.
    • Note webpack serve bundles and serves the assets in memory, so you will not see the changes written to disk.
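Putting those settings together, a minimal webpack.config.js might look like this (a sketch; assumes babel-loader and webpack-dev-server v3 are installed, where devServer.contentBase is valid — v4 renamed it to static):

```javascript
const path = require('path');

module.exports = {
  mode: 'development',   // development bundle with source map support
  devtool: 'source-map', // see the original files and set breakpoints
  entry: './src/index.js', // explicit entry point (defaults to ./src)
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'public'),
  },
  module: {
    rules: [
      {
        test: /\.js$/,
        exclude: /node_modules/,
        use: { loader: 'babel-loader' }, // reads .babelrc for transpile settings
      },
    ],
  },
  devServer: { contentBase: './dist' }, // webpack serve with HMR
};
```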


  • Learning the basics of yarn workspaces with Ben Awad (what a guy). Yarn workspaces facilitate having a mono-repo: multiple projects hosted in a single GitHub repo. When you have a mono-repo set up with something like yarn workspaces, it makes it easier to share dependencies and code between packages. The classic case is validation logic that may need to exist on both the back and front end.
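A minimal root package.json for a yarn-workspaces mono-repo might look like this (a sketch; the names are hypothetical):

```json
{
  "name": "my-monorepo",
  "private": true,
  "workspaces": ["packages/*"]
}
```

Each folder under packages/ then gets its own package.json, and something like shared validation logic can be depended on by name from both the web and server packages.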


  • Struggling with React MUI components and layouts. I want to be able to just eject and have full control of the CSS. When you have nested components where each component has its own CSS rules, it becomes very hard to understand which rules are potentially clashing. Is there a better way?


  • Playing with the Material UI Grid API. The API is based on CSS flexbox, so the mental model of flex containers and items translates across. There is an additional 12-column layout system which allows for some extra leverage to create controlled layouts.


  • Working on a little side project which requires a masonry layout and smart sorting. Surprisingly challenging… Trying to convert this repo into a TS-friendly module as the first step. Next, to try and understand the virtual grid concept.


  • Reading more blogs about Jest.
    • 3 ways to mock
    • jest.fn(), used to mock a function
    • jest.mock(), used to mock entire modules
    • jest.spyOn(), used to watch a function, exposing extra metadata about how, when, and how many times a function/method was called


  • Got some great advice from dev.to which has helped clear up my understanding of spies and mocks.

jest.mock() vs jest.spyOn(): looks like you are using jest.mock() and jest.spyOn() on the same function. Usually these are used interchangeably, but not together.

Another good trick is to properly type mock and mockResolvedValue. Simply cast to jest.Mock like so:

(axios.get as jest.Mock).mockResolvedValue(fakeResp);


  • Putting what I’ve learnt about jest mocking to work and got unit tests passing with help from my work bud. Jest and TypeScript make things slightly more difficult. The Jest TS docs look like a good resource which I had somehow completely overlooked.


  • Jest mocks and spies seems to make a bit more sense today.

    • Mocking is required for libs, to let Jest override the functionality. With Jest we often need to use jest.mock(). For example with axios we would do jest.mock('axios').
    • When TS is in play we need to assert some special Jest types to make it play nice, like so:
    const mockedAxios = axios as jest.Mocked<typeof axios>

    This provides us with a mocked instance of axios where we can override specific functions of axios like .get and .post. Jest’s mockResolvedValue is a nice way of returning a fake resolved promise response.

    • When doing this, it’s probably not so important to assert the fake input and responses. With spies, we can watch when these axios properties were called and how many times. The syntax looks like this:
    const axiosSpy = jest.spyOn(mockedAxios, "get"); // first arg is the parent object, second is the property we want to watch
    // not sure what you want to do if you want to just watch the parent object 🤔
    • The spied instance lets us assert other useful things


  • Revised mocking, spies and unit testing with Jest today. Revisiting fundamental testing concepts via some great blogs by Kent C Dodds:

  • jest.fn() lets you mock the implementation of a method that might be expensive or flakey to run. Usually you write a mock implementation to return some fake data, like jest.fn((x) => x), a.k.a jest.fn(<any arbitrary function>)

  • Kent C. Dodds talks about ‘monkey patching’ using jest.fn(), a term which means to override existing functionality. This is useful when we need to mock a library: we can simply override the implementation with our mock.

    const originalGetWinner = utils.getWinner;
    utils.getWinner = jest.fn((p1, p2) => p2);
    • When using plain JS, utils.getWinner gets new properties added by Jest, such as .mock.instances, .mock.calls and .mock.results, which are used for a range of different assertions, like what arguments were passed in, where the original instance came from and what the actual result was.

    • TS will complain that mock does not exist on the property. To get around this we can use a Jest spy instead, which is a way to watch when the function/method was called. It looks like this:

    • const spy = jest.spyOn(utils, "getWinner");
      spy.mockImplementation((p1, p2) => p2);
      const winner = thumbWar("alex", "jenny");
      spy.mock.calls.forEach((args) => {
        expect(args).toEqual(["alex", "jenny"]);
      });
      spy.mockRestore();

      it offers all the same functionality that is exposed on a .mock property.
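The monkey-patching idea can be sketched without Jest at all; a spy is just a wrapper that records calls and can restore the original (a toy sketch, not Jest’s actual implementation):

```javascript
// Minimal hand-rolled spy: wraps a method, records every call's arguments,
// and can restore the original. This is the essence of monkey patching.
function spyOnMethod(obj, key) {
  const original = obj[key];
  const calls = [];
  obj[key] = (...args) => {
    calls.push(args);
    return original.apply(obj, args);
  };
  return { calls, restore: () => { obj[key] = original; } };
}

const utils = { getWinner: (p1, p2) => p1 };
const spy = spyOnMethod(utils, "getWinner");
utils.getWinner("alex", "jenny");
console.log(spy.calls); // [["alex", "jenny"]]
spy.restore();          // utils.getWinner is the untouched original again
```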


  • Feeling slightly defeated today. Struggled with Prisma 1 and also writing unit tests. I need to get more confident with writing mocks and work on my pair programming skills.


  • Starting to learn about Prisma 1 for some legacy parts of a work app. Interesting to see how similar and different the API is between 1 and 2. On the one hand, there is still the auto-generated API based on the SDL, but the discovery of the API is much better when using the GQL playground. Worth noting that the Prisma playground is not bundled in Prisma 2…


  • Learnt about the difference between defining your GraphQL schemas using a code-first vs an SDL (schema-first) approach. I learnt the SDL approach first during Fullstack TS, React & Node. The run-down of the industry learnings is well documented in this blog by Prisma. It seems the industry initially agreed that an SDL-first approach is good, but has now pivoted to a code-first methodology. This is because you only need to reason about the implementation of your GQL resolvers in one language. Nexus is an example of a code-first GQL library.


  • Finished reading “So Good They Can’t Ignore You”
    • Rule 1 - Don’t follow your passion. Instead work on building rare and valuable skills, a.k.a your career capital.
    • Rule 2 - Focus on having a craftsmanship mindset. Keep honing your skills and stretch yourself constantly.
    • Rule 3 - Take control of what you want to do, but only once you have built up enough career capital to demand it. Beware, because the moment you have built up enough career capital is the exact intersection where others try to take control away from you.
    • Rule 4 - Have a clear mission. Think about this deeply and constantly once you have applied all 3 rules above. Finding your mission is a challenge in itself. You can’t force it. It requires constantly scanning your field for the potential next big thing. Once you have found it, making it your goal will bring clarity to everything you do. This is how remarkable people do remarkable things.


  • Further familiarisation with Nexus and Prisma. Getting a feel for the full workflow.
    1. update your schema.graphql file
    2. Run npx prisma migrate to create schemas in the db
    3. Run npx prisma generate to generate the types which will be used for graphql resolvers and also FE.
    4. Wire up the definitions for the GQL server via Nexus. The nexusPrisma plugin is “aware” of the new schemas in Prisma if set up correctly, which means defining the objectTypes is straightforward, as t.model will be aware of what properties are available.
    5. Remember to add these definitions to the nexus makeSchema function.
    6. Queries and Resolvers can now be created using the nexus queryType function.


  • Got Prisma and Nexus all wired up, but getting pretty confused about the state of nexus-plugin-prisma. My understanding is that it was maintained by Nexus but is now deprecated, to be replaced by nexus-prisma, which does not have parity with nexus-plugin-prisma, most fundamentally t.crud, which automagically creates a CRUD API from your GQL schema.


  • More hands-on learning with Prisma 2 today. Followed this tutorial to set up Next, TS, Prisma, GraphQL and have CRUD operations with Nexus.

    • Next already comes with an API layer, so we don’t need to set up Node separately
    • Prisma doesn’t ship with GraphQL anymore, but we can easily set up GraphQL with Apollo Server by creating a graphql endpoint in the pages/api directory, then adding the following boilerplate to a file called graphql.ts
    import { ApolloServer } from "apollo-server-micro";
    import { schema } from "src/schema";
    import { createContext } from "src/context";

    const server = new ApolloServer({
      schema,
      context: createContext,
      tracing: process.env.NODE_ENV === "development",
    });

    const handler = server.createHandler({ path: "/api/graphql" });

    export const config = {
      api: {
        bodyParser: false,
      },
    };

    export default handler;

    context is a factory function which returns an instance of Prisma:

    import { PrismaClient } from "@prisma/client";

    const prisma = new PrismaClient();

    export interface Context {
      prisma: PrismaClient;
    }

    export function createContext(): Context {
      return { prisma };
    }
    • As Prisma 2 no longer comes with a GraphQL server and is more of a library, it does not ship with GraphQL CRUD operations out of the box. Nexus is a newer library which can create these CRUD operations based on an existing schema. It has the ability to create queries and mutations for single or multiple elements.


  • Learning the basics of Prisma 2. The slightly confusing thing about Prisma 2 is that it is many things; it’s worth understanding that it is a totally different product from Prisma 1. I haven’t drilled into the differences between the two products yet, but there is some useful information here.

  • What I understand about Prisma 2 so far.

    • It currently supports PostgreSQL, MySQL and SQLite. MongoDB support is coming soon.
    • DB schemas are defined using a schema.prisma file which uses GraphQL-like syntax.
    • In here the db source is defined with datasource
    • The rest of the db schemas are defined in their own model blocks. There are lots of additional attributes/directives/modifiers that can be used, so that you have more control over the field types and can create relationships with other tables.
    • Once schemas are defined you need to run prisma migrate, which will create real db schemas based on what was provided in the schema.prisma file. It will handle db migrations too.
    • When working with Prisma in Node, you will be using @prisma/client. Under the hood, the client reads the schema.prisma and generates a client that is tailored to your models.
    • The Prisma client can be thought of as a query builder.
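A minimal schema.prisma along those lines might look like this (a sketch; the models and fields are hypothetical):

```prisma
datasource db {
  provider = "postgresql"
  url      = env("DATABASE_URL")
}

generator client {
  provider = "prisma-client-js"
}

model User {
  id    Int    @id @default(autoincrement())
  email String @unique
  posts Post[] // one-to-many relation
}

model Post {
  id       Int    @id @default(autoincrement())
  title    String
  author   User   @relation(fields: [authorId], references: [id])
  authorId Int
}
```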


  • Learning more about the nuances of Cypress. An application should ideally be built with e2e testing in mind. This means adding testing tags like test-id="InputForm" to elements that need to be targeted by the Cypress API. Sometimes when component APIs are not using semantic HTML, it can make testing quite difficult. For example, some drop-down lists in Material UI are a div with li elements in them. When there are no test tags that can be targeted, we may need to do something “hacky” and target ids or aria attributes instead, then search for [sibling](https://docs.cypress.io/api/commands/siblings#Syntax) elements instead.


  • Learning about Cypress for e2e and integration testing. It’s different from something like Jest for unit testing and much more similar to something like Selenium. The API lets you pick out specific DOM elements, then focus, click or perform other user actions.
  • The mindset of writing Cypress tests is different from writing traditional unit tests and applying arrange, act, assert. Instead it’s more like arrange, act, assert, act, assert, act, assert. You are writing tests that reflect the “user journey”.

example cypress test

context('Example', () => {
  it('example cypress test', () => {
    // hypothetical selectors, just to show the shape of a test
    cy.visit('/');
    cy.get('[test-id="InputForm"]').type('hello');
    cy.contains('Submit').click();
  });
});




  • Today has been about learning about working with forms in React with Formik in the deep end. Took me a while to grok the API but I think I finally get it now.

    • The form state can be initialised with an object of any shape in the initialValues prop.
         <Formik
           initialValues={{ name: 'jared', items: ['happy', 'sad', 'mad'] }}
           onSubmit={(values, actions) => {
             setTimeout(() => {
               alert(JSON.stringify(values, null, 2));
               actions.setSubmitting(false);
             }, 1000);
           }}
         >
           {/* ...components */}
         </Formik>
    • Accessing individual parts of the initialValues object can be achieved with the <Field /> component
    <Formik initialValues={{ name: "jared", items: ["happy", "sad", "mad"] }}>
      <Field as="select" name="color">
        <option value="red">Red</option>
        <option value="green">Green</option>
        <option value="blue">Blue</option>
      </Field>
    </Formik>
    • I think programmatic mutation can also be done by utilising the render prop to render more JSX, using Formik’s form.setFieldValue
     <Formik initialValues={{ name: 'jared', items: ['happy', 'sad', 'mad'] }}>
       <Field name="color" render={({ form }) => (
         <Button onClick={() => form.setFieldValue('color', 'pink')}>
           Make it pink
         </Button>
       )} />
     </Formik>


  • Just read a little bit of So Good They Can’t Ignore You last night. Giles Bowkett is a famous developer in the Ruby community. He had a clear mission to combine his coding career with music, which propelled his success. Having a specific mission allows for concentrated effort and a clear direction of what needs to be done next.


  • Starting to learn the basics of RxJS and observables, a useful tool for working with async processes using a new primitive called observables.
  • In RxJS (via redux-observable) you set up a one-time process called an epic which represents the “pipes” of your async code. All async data flows through these pipes, and the pipes can run actions based on this data, so it works well with async processes in Redux. RxJS also comes with a bunch of ready-to-go methods for managing data, such as map, reduce and debounce, so it can also be considered a utility lib like lodash, but for async code.
  • The observable primitive is not too dissimilar to a Promise. A Promise can resolve or reject. An observable subscriber has three callbacks.
    • A callback when the data stream was successful
    • A callback when the data stream was unsuccessful
    • A callback when the data stream has completed or ended
    // example observable subscription (observable$ is assumed to exist)
    observable$.subscribe(
      (value) => console.log("next", value),
      (err) => console.error("error", err),
      () => console.info("complete!")
    );
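The three-callback contract can be sketched in plain JS without RxJS (a toy sketch, not how RxJS is actually implemented):

```javascript
// Toy observable: subscribe() feeds each value to the "next" callback,
// then signals completion, routing any thrown error to the error callback.
function of(...values) {
  return {
    subscribe(next, error, complete) {
      try {
        values.forEach((value) => next(value)); // emit synchronously
        complete();
      } catch (err) {
        error(err);
      }
    },
  };
}

const seen = [];
of(1, 2, 3).subscribe(
  (value) => seen.push(value),
  (err) => console.error("error", err),
  () => seen.push("complete!")
);
console.log(seen); // [1, 2, 3, "complete!"]
```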


  • Revisiting my blog; trying to get netlify redirects/rewrites to play nicely with gridsome is proving tricky. I want URLs to resolve to custom endpoints so they can point at my side projects. This should be a good way to track analytics without needing multiple new tracking accounts…


  • Second full week complete at NB. Navigating around the new code base a little more confidently now. Feeling pretty comfortable with React. The hard parts are the patterns outside of React. I’m still unsure how GraphQL is integrated into the project; I’m fascinated by the fact that types are auto-generated from the queries in the code base. Another complicated part is when third-party libs are integrated into the project and wrapper classes are hand-rolled. Little mini universes of complexity are created everywhere. These patterns and paradigms take time to master.


  • Learning about Konva.js for canvas shape manipulation.
  • Squashing bugs around application focus management. Locking focus inside a modal can be tricky. Discovered focus-lock to help handle this.


  • More React bug squashing at work. Resolving issues around React Router where we are trying to intelligently go back to the previous page with smart info. The problem is that we are mimicking functionality that you get for free in browsers. These kinds of bugs make you wonder whether frontend frameworks are trying to do too much. When should you say no and steer people towards the built-in functionality of browsers?
  • Reading more of So Good They Can’t Ignore You. Defining a good mission in what you do is incredibly important. However, getting to work on a meaningful mission comes only after having built up enough career capital.


  • Second week at NB and getting my teeth into more TS/React. TypeScript still makes my eyes go crossed; I’m hoping things get easier as I familiarise myself with the codebase!!

  • Reading more of So Good They Can’t Ignore You. Summary of the possible control traps:

    • It’s dangerous to try and gain more control of your life before you have enough career capital. You simply won’t be able to sustain your lifestyle.
    • Once you have gained enough career capital to bid for more control, this is the exact intersection when people/employers fight to keep you on a more traditional track.
    • The law of financial viability is a way to navigate whether you are indeed gaining career capital. Ultimately it boils down to: “are people willing to pay for what you do?”


  • Worked on my dev.to gridsome plugin. Learnt a good technique to iteratively resolve an array of promises one at a time. Sort of defeating the purpose of promises in some ways… but really useful if you need to queue up a bunch of requests to an API without hitting a 429 (rate limit) error, like what was happening with the dev.to API.

Some really awful solutions.

But this is the best one i’ve found.

async function printFiles() {
  const files = await getFilePaths();

  for await (const contents of files.map((file) => fs.readFile(file, "utf8"))) {
    console.log(contents);
  }
}
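Worth noting: for await over files.map(...) starts every read at once and only consumes the results in order. To genuinely resolve one at a time, the simplest pattern is a plain for...of with await inside (a sketch; fetchOne is a stand-in for any async task):

```javascript
// Run async tasks strictly one at a time, collecting results in order.
// The next task only starts once the previous one has resolved.
async function runSequentially(items, fetchOne) {
  const results = [];
  for (const item of items) {
    results.push(await fetchOne(item));
  }
  return results;
}

// usage with a stand-in async task
runSequentially([1, 2, 3], async (n) => n * 2)
  .then((out) => console.log(out)); // [2, 4, 6]
```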



  • Weekends are for side projects now 😎. Worked on TypeScript-ifying musical OSM. I read through the OSM service logic, adding comments, and converted it all to TS. Learnt an interesting technique: setting off delayed functions using setTimeout. In the context of the OSM service, it checks the time of the OSM changeset, gives it a small offset relative to the current time, then fires it into a setTimeout. The effect is like the data is being streamed in. The technique could also be used to throttle the number of requests being made to an API.
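That replay trick can be sketched like so: convert each event’s absolute timestamp into a delay relative to the first event, then hand each one to setTimeout (a sketch; the changeset shape is hypothetical):

```javascript
// Turn absolute timestamps into relative setTimeout delays so recorded
// events "stream" back in at their original pacing.
function toDelays(events) {
  const start = events[0].timestamp;
  return events.map((e) => ({ ...e, delayMs: e.timestamp - start }));
}

function replay(events, onEvent) {
  for (const e of toDelays(events)) {
    setTimeout(() => onEvent(e), e.delayMs);
  }
}

// usage: the second event fires 500ms after the first
// replay([{ id: 1, timestamp: 1000 }, { id: 2, timestamp: 1500 }], console.log);
```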


  • Survived the first week at my new job! Feels good to be coding full time and I’m starting to get to grips with React a lot more. Coded my first React hook today and working through some tricky routing logic, but it’s all coming together! Lots of things I need to learn and continue to improve on.
    • Writing idiomatic React, understanding the rules of hooks, and the various nuances of React.
    • a11y and the WCAG spec.
    • Various react libs, Snack-provider, Helmet, type-routes.


  • Getting a feel for what a large-scale, well-typed React codebase is like at work. It’s glorious. Code discoverability is amazing and I feel like I can be productive even after just a few days of looking at it.


  • Learning about some React component organisation patterns:

    • Create “presentational” components which simply wrap the views for a specific data-loading state, e.g. loading, error, success. There should be no styling in here. Each of these presentational components is exported.
    • Create “container” components, which import the presentational components above. The purpose of a container component is any further advanced logic and data fetching. Create “frames” for each of the presentational views above and conditionally render these frames based on the logic of the component, most likely data-fetching related. There should be no styling in here.
    • The container component can then be imported into a view component where styling is applied. There should be little to no business logic here.
  • Read some more of “So Good They Can’t Ignore You”.

    • The biggest career trap most people fall into is trying to gain autonomy over their lifestyle before they have acquired enough career capital. This means your lifestyle is unsustainable.
    • Once you gain enough career capital, you can demand more autonomy and time off because your skills are so valuable, people have no other choice but to meet your demands.
    • The moment you gain enough career capital is also exactly when others will try to gain more control over you.


  • First day at NB today. Learning all about the tech stack used in the project I’m working on: React, Node, GraphQL, TypeScript, Prisma.
  • Need to add Prisma to my learning list now. Looks like an interesting technology.
  • Learning how the project is organised into a mono repo. I like how it’s possible to run each layer of the application locally, and how they have configured the dev pipeline so that you can point to a hosted “staging” version of the app.


  • Cant sleep, reading a few pages of so good they can’t ignore you.

    • People ultimately enjoy control and autonomy. Career capital helps you work on what is meaningful to you. From there you can invest that career capital into the traits that define great work, one of which should be acquiring control.
  • Started work on converting Musical OSM into TypeScript


  • Read a little bit of So Good They Can’t Ignore You.
    • Your rare and valuable skills are your career capital.
    • Acquire as much of this as you can with a craftsman mindset.
    • The craftsman mindset means you focus on what you can offer the world rather than what the world can offer you.
    • To truly become “So Good They Can’t Ignore You” requires deliberate practice and stretching yourself. Athletes and musicians do this to become elite. It is less practiced by knowledge workers, so if you apply it, it is possible to vault past your peers in the acquisition of career capital.


  • Wrapped up the last project of Full-Stack React, TS, Node & GraphQL. Got everything I wanted from it in terms of learning how to set up GraphQL from scratch. It was a nice bonus to also learn about Redux and TypeORM, and it has highlighted some gaps in my knowledge around data modelling.

  • Hacked about on something special. Massively inspired by Trams in Helsinki. I remember being in total awe seeing that visualisation for the first time. I always said I would create something similar once I got good enough at coding. I feel a sense of coming full circle after creating this, and just in time before starting my new job as a full-time JavaScript Developer.


  • Learning to make client-side GQL mutations using the Apollo useMutation hook. The first argument is a string of the GQL mutation. It also takes an options object, one property of which is refetchQueries. Here another GQL query can be provided, which will be executed after the mutation has completed. This is useful as some mutations may not return any data, so a follow-on query is often required.


  • Revisiting how Redis can be used in an app that requires authentication.
    • Redis can replace the need to store the “user” state locally in the app, such as in localStorage. Redis acts as a mini data store to quickly retrieve the state of the user.
    • The Redis store stays in sync with the client through the use of cookies. The cookie can be configured as httpOnly so it can’t be tampered with via JavaScript.
    • If the client responds without a cookie, with an invalid one, or with one that has timed out, the user is effectively “logged out”.
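The flow above can be sketched as a pure function, with a Map standing in for Redis. The names here (`sessionStore`, `getUserForRequest`) are illustrative, not a real Redis client API:

```typescript
// Sketch of cookie-based session lookup; a Map stands in for Redis.
type Session = { userId: string; expiresAt: number };

const sessionStore = new Map<string, Session>(); // Redis stand-in

function getUserForRequest(cookieSessionId: string | undefined, now: number): string | null {
  if (!cookieSessionId) return null;                      // no cookie => logged out
  const session = sessionStore.get(cookieSessionId);      // fast key lookup, like a Redis GET
  if (!session || session.expiresAt <= now) return null;  // invalid or expired => logged out
  return session.userId;                                  // logged in, no DB query needed
}

// usage: a session exists for cookie "abc123" until t=1000
sessionStore.set("abc123", { userId: "u1", expiresAt: 1000 });
```

The important property is that the check is a single key lookup, which is why no round trip to the real database is needed.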

  • Learning how to use apollo client in a React code base

    • First we import ApolloClient, InMemoryCache and ApolloProvider from "@apollo/client" in our root React component, a.k.a. index.tsx
    • Setup the ApolloClient instance like so:
    const client = new ApolloClient({
      uri: "http://localhost:9090/graphql",
      credentials: "include",
      cache: new InMemoryCache({
        resultCaching: false,
      }),
    });

    Ensure the backend has CORS enabled and is aware of the domain and port the React client is running under.

    • Finally we can wrap the main React <App/> in the <ApolloProvider/> and provide it a client prop, which is the client object created above.

    • Now we can make queries to the GQL server in any component like so:

      import React from "react";
      import { gql, useQuery } from "@apollo/client";

      const myGQLQuery = gql`
        query GetAllCategories {
          getAllCategories {
            # example fields, substitute your schema's own
            id
            name
          }
        }
      `;

      const MyComponent = () => {
        const { loading, error, data } = useQuery(myGQLQuery);
        return (
          <div className="leftmenu">
            {loading ? (
              <span> loading </span>
            ) : error ? (
              <span> Error </span>
            ) : (
              <span> data </span>
            )}
          </div>
        );
      };

      export default MyComponent;
    • Learning about the useLazyQuery hook in Apollo Client, which lets you execute GraphQL queries on demand. It is also useful in tandem with GraphQL variables: when calling the query from JavaScript, you can supply a parameter to the GraphQL argument by passing an object with a variables object nested within it.


  • Starting to understand some nice coding architecture patterns when using TypeORM with Apollo from reading Full-Stack-React-TypeScript-and-Node.
    • Table schemas are all written in the TypeORM syntax in the repo folder. Note these also act as the types used by TypeScript too!
    • Specific queries that need to be made against each table can be written in a corresponding file in the format <TableName>Repo. There can be arbitrary logic in this file, and TypeORM provides nearly every API required to query, filter and join data. The functions can be exported and then used in GraphQL resolvers or even in a regular REST API.
    • Next we define our GraphQL schema, queries and mutations in a typeDefs file, which acts as the contract for implementation in a resolvers file.
    • Implementation of the typeDefs can now be carried out in the resolvers file. Rinse and repeat for every CRUD operation required for your app.
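The repo/typeDefs/resolvers split can be sketched without Apollo or TypeORM. The `Todo` type, `getTodo` query and in-memory `todoRepo` below are made-up examples of the layering, not code from the book:

```typescript
// Contract layer: the schema declares what exists (SDL as a plain string here).
const typeDefs = `
  type Todo { id: ID! title: String! }
  type Query { getTodo(id: ID!): Todo }
`;

// Repo layer: arbitrary data-access logic, here just an in-memory array.
const todoRepo = {
  findOne: (id: string) =>
    [{ id: "1", title: "learn GraphQL" }].find((t) => t.id === id),
};

// Resolver layer: implements the contract declared in typeDefs.
const resolvers = {
  Query: {
    getTodo: (_parent: unknown, args: { id: string }) => todoRepo.findOne(args.id),
  },
};
```

The point of the split is that the resolver stays thin: it only translates GraphQL arguments into repo calls.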


  • Completed creating a set of REST API endpoints as part of the Super Forum project in Full-Stack-React-TypeScript-and-Node. Returning data back from a PostgreSQL db using TypeORM all makes sense. Now back in GraphQL land and adding Apollo as an Express middleware to expose a graphql endpoint. The same old boilerplate logic as previously learnt. Once set up, it’s quite a repetitive pattern of first defining your typeDefs, i.e. defining the shape of all the data schemas up front, plus the types of mutations and queries which can be performed. Once all defined, it’s writing the “real” logic in the resolvers.
  • Learning __resolveType. If a query or mutation resolves to a union type, it’s important that the type is also implemented using __resolveType. Further reading
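A minimal sketch of what __resolveType looks like, assuming a hypothetical SearchResult union of User | Post (the union and its fields are invented for illustration):

```typescript
// GraphQL calls __resolveType to decide which concrete type of a union
// a returned plain object belongs to. Here we discriminate by field presence.
type User = { username: string };
type Post = { title: string };

const resolvers = {
  SearchResult: {
    __resolveType: (obj: User | Post): string =>
      "username" in obj ? "User" : "Post", // return the concrete type's name
  },
};
```

Without this, the server cannot know which member of the union a resolver's return value represents.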


  • Read a little bit of “So Good They Can’t Ignore You”.
    • “Define Good”. At what point do you quantify your skills as “good”? Try to find something that is measurable. From a development perspective, perhaps it could be “I can build x kind of apps”.
    • “Stretch & Destroy”. Continually stretch your abilities and put yourself in the uncomfortable zone; pushing past your comfort zone is the only way to accelerate your skills forward.
    • “Be Patient”. Realise that what you do today is laying down the seeds which you can only reap in 20, 30 or 40 years’ time.


  • More backend work for the SuperForum project as part of Full-Stack-React-TypeScript-and-Node. Using decorators with TypeORM is quite a struggle; I’m not fully grasping the syntax entirely but getting things to work somehow. Database modelling is still a bit of a mind warp for me, definitely a weaker area in my skill set and something I should try to improve upon. Creating table relationships is still especially hard for me to get my head around for some reason. The API for querying your data in TypeORM feels much more comfortable. Having nice methods like findOne and create feels like coding with Firebase or Mongo again.

  • Learning what the architecture for authentication looks like if you were to implement it from scratch:

    • Your User table can hold the record of the user and include fields such as Confirmed to indicate if the user account has been verified.
    • The logged-in state of the user can be managed by Redis. By making use of cookies, this can track which specific user is logged in or out. This means that whenever the user accesses the app, there is a quick handshake with the Redis server to check if they are logged in or not. Because Redis is fast, no query to the DB is required.


  • Crash course on using Redis for managing session state in an app, and also TypeORM for managing db schemas in PostgreSQL. Suddenly a lot of new technologies to get my head around. A few things I’m pondering… how much of Redis could be offloaded to the browser’s localStorage instead? This textbook has a big emphasis on server-side state management, which they claim gives you more control over the user’s environment. What kind of apps need that level of control?

  • First time dabbling with an ORM. I’ve always understood their purpose and can definitely see their benefits. Like with any abstraction, it’s always good to learn the lower-level technology when possible. ORMs make me slightly uncomfortable as I’m giving up control over how the SQL is being written. I think these abstractions only make sense when you have multiple developers and there needs to be some sort of conformity …

    • Does the level of complexity introduced to an app grow in line with the number of developers working on it? As more tools are created to seemingly make our lives easier, they eventually become industry standards which everyone else has to understand. There is something to be said for solo devs like Pieter Levels who can build apps extremely quickly using very simple technologies…


  • Learning some interesting React.js design patterns:

    Factory Components

    This is a pattern whereby you create a component whose sole purpose is to render other components. Some clear benefits of creating a factory component:

    • The factory component can contain the logic of how and when a certain component is to be rendered. For example, if the factory is provided an array of objects as a prop, it could conditionally render a component based on specific properties of each object.
    • It creates a separation of concerns between the logic for conditionally rendering a component and the generic component being rendered.
    • When we abstract logic out into a separate component like this, it makes it easier to optimise the component by wrapping it all in React.memo.

    Further reading

    Passing dispatch down as a prop

    This is a bit of a mind warp, but once you have set up a useReducer in a component it’s possible to create child components that accept the dispatch function as a prop. This allows the child components to mutate the state of the parent component. The tricky thing is knowing which dispatch context is in play when using it in the child components.


  • Working through more of the SuperForum project as part of Full-Stack-React-TypeScript-and-Node. Mixed thoughts on the way this “real world” app is being taught. It’s effectively copy and pasting code from GitHub; forcing myself to read and parse all the code is quite slow and time consuming, but it’s better than just mindlessly copy and pasting the code, which is what the teaching style seems to advocate. Overall getting the hang of reading useEffect and starting to see how it’s commonly used along with the useState hook for API data fetching.


  • More work on the SuperForum project as part of Full-Stack-React-TypeScript-and-Node. Refactoring React components is surprisingly easy, and the VS Code support for TSX is really amazing; I can understand why people like the DX so much. I have noticed that complexity can stack up pretty quickly in a component. useEffect is used everywhere and is clearly an important hook, but it’s difficult to parse in my head; perhaps it just requires more practice. I naturally want to compare it to Vue, which has a much more user-friendly API. It’s no wonder front-end frameworks are such a common subject for debate. They are both just tools at the end of the day. Debating which tool is better made does not matter if you don’t do anything meaningful with the tool.


  • Worked on the SuperForum project as part of the Full-Stack-React-TypeScript-and-Node textbook. Making heavy use of the useReducer hook for the Register and Login components. useReducer excels at letting you micromanage specific properties in an object. As React does not let you mutate state directly, we can only mutate state via useReducer using the dispatch function, which is made available when you destructure the useReducer return value.

When we destructure useReducer we get access to an array which contains two things: the first item is the state object, the second item is the dispatch function used for mutating the state.

The useReducer function itself takes two arguments: a reducer function (a function with a switch-case that actions something depending on the action type being dispatched), and the initial state object, i.e. basically what the first instance of your state should look like.

When using useReducer it’s common to destructure the returned array and then also destructure the state object within that array, like so:

const [{ propA, propB }, dispatch] = useReducer(aReducerFunction, {
  propA: "",
  propB: "",
});
This is probably why useReducer is kind of hard to understand at first glance, and that’s before we have even talked about the reducer function, in this example aReducerFunction.

The body of aReducerFunction could look something like this:

export const aReducerFunction = (state: any, action: any) => {
  switch (action.type) {
    case "updatePropA":
      return { ...state, propA: action.payload };
    case "updatePropB":
      return { ...state, propB: action.payload };
    default:
      return { ...state, resultMsg: "Not a valid action" };
  }
};

In this example, if a valid action has been provided, i.e. “updatePropA” or “updatePropB”, we spread the original state back into a new object and ensure the relevant property is updated. Spreading the whole original state into a new object and returning it is very much a “React” pattern, following the functional programming paradigm of not mutating objects directly. This is our way of updating a reactive object in React.
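Because a reducer is a pure function, it can be exercised directly without React. The reducer below mirrors aReducerFunction, redeclared here (with types instead of `any`) to keep the sketch self-contained:

```typescript
// Same shape as aReducerFunction above, typed and self-contained.
type State = { propA: string; propB: string; resultMsg?: string };
type Action = { type: string; payload?: string };

const reducer = (state: State, action: Action): State => {
  switch (action.type) {
    case "updatePropA":
      return { ...state, propA: action.payload ?? "" };
    case "updatePropB":
      return { ...state, propB: action.payload ?? "" };
    default:
      return { ...state, resultMsg: "Not a valid action" };
  }
};

const initial: State = { propA: "", propB: "" };
const next = reducer(initial, { type: "updatePropA", payload: "hello" });
// `initial` is untouched: the spread created a brand-new object instead of mutating.
```

Calling it by hand like this makes the "no direct mutation" rule visible: the old state object survives unchanged after every dispatch.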


  • Learning how to create an Error Boundary component for catching React errors gracefully. Currently this can only be done using React class syntax, as it relies on the componentDidCatch lifecycle event, which is not available as a React hook yet. When you wrap the main <App/> component in this error boundary you can catch any React-specific errors from all children components and render whatever we want. Note that Error Boundaries can’t catch errors in server-rendered React, in event handlers, or in asynchronous code. These are usually managed independently using try/catch.


  • Setting up another custom GraphQL Express app from scratch. It’s possible to add custom middleware using graphql-middleware. This lets us write functions that are called before every resolver. Example boilerplate for what custom GraphQL middleware looks like:
export const log = async (
  resolver: any,
  parent: any,
  args: any,
  context: any,
  info: any
) => {
  if (!parent) {
    console.log("Start Logging");
  }
  const result = await resolver(parent, args, context, info);
  console.log("finished call to resolver");
  return result;
};

  • Learning how to write unit tests for GraphQL query resolvers

    1. Write a testGraphQLQuery wrapper function; it takes the schema, source and variableValues:
    import { graphql, GraphQLSchema } from "graphql";
    import { Maybe } from "graphql/jsutils/Maybe";

    interface Options {
      schema: GraphQLSchema;
      source: string;
      variableValues?: Maybe<{ [key: string]: any }>;
    }

    export const testGraphQLQuery = async ({
      schema,
      source,
      variableValues,
    }: Options) => {
      return graphql({ schema, source, variableValues });
    };
    2. This can then be used in a Jest test suite. Using the makeExecutableSchema function we can pass in our real type definitions and resolvers as the schema parameter.
    3. We can pass in a real stringified GraphQL query for the source parameter.
    4. We can pass in an arbitrary argument the GraphQL query accepts as the variableValues parameter. Example code:
    import typeDefs from "./typeDefs";
    import resolvers from "./resolvers";
    import { makeExecutableSchema } from "graphql-tools";
    import faker from "faker";
    import { testGraphQLQuery } from "./testGraphQLquery";
    import { addMockFunctionsToSchema } from "apollo-server-express";

    describe("test getting a user", () => {
      const GetUser = `
        query GetUser($id: ID!){
          getUser(id: $id){
            id
            username
            email
          }
        }
      `;
      it("get the desired user", async () => {
        const schema = makeExecutableSchema({ typeDefs, resolvers });
        const userId = faker.random.alphaNumeric(20);
        const username = faker.internet.userName();
        const email = faker.internet.email();
        const mocks = {
          User: () => ({
            id: userId,
            username,
            email,
          }),
        };
        addMockFunctionsToSchema({ schema, mocks });
        const queryResponse = await testGraphQLQuery({
          schema,
          source: GetUser,
          variableValues: { id: faker.random.alphaNumeric(20) },
        });
        const result = queryResponse.data ? queryResponse.data.getUser : null;
        expect(result).toEqual({
          id: userId,
          username,
          email,
        });
      });
    });


  • Learning how to write my own graphql mutation resolvers. Not too different from writing a query resolver.

    1. Make sure you have updated your type definition, a.k.a. schema, to define what attributes can be written by the mutation.
    2. Mutation logic is nested in the Mutation resolver object.
    3. Write your function, which takes in the following args: parent, args, ctx, info. When working with TS we can further annotate the args to see what is available. These are the arguments which will be provided by the consumer of the GraphQL mutation.
    4. Write the logic, which can be any arbitrary code; usually it will be logic that writes to the db.
  • GraphQL subscriptions, as per the name, give you a real-time subscription to a GraphQL schema to be notified of any changes. Apollo also lets you write your own subscriptions, which requires quite a bit more boilerplate code. It requires the use of the createServer function from the Node http module. The http server serves the Express app. This instance of the http server is then provided to the instance of Apollo Server so it is aware of the subscriptions. Once all set up, writing our subscription logic follows the same pattern as GQL queries and mutations.

    1. Update the GQL type definitions to define what attributes the subscription should expose
    2. Write the resolver for the subscription. For any mutation the subscriber depends on, we need to make use of the pubsub method, which can be destructured from the ctx object. This lets you “publish” or “notify” GQL whenever that mutation has occurred
    3. A method also needs to be written in the resolver Subscription object, e.g.:
      Subscription: {
        newTodo: {
          subscribe: (parent, args: null, { pubsub }: GqlContext) =>
            pubsub.asyncIterator(NEW_TODO),
        },
      },


  • Practiced creating a small GraphQL server using Node, Express and Apollo. Lots of boilerplate code, but in summary:
    • Make use of ApolloServer and makeExecutableSchema from apollo-server-express. This lets you use Express as the middleware for the Apollo server.
    • Next we can define our resolvers, which can perform any arbitrary action. The resolvers must be supplied back to makeExecutableSchema.
    • Launching the Apollo Node app serves a GraphQL explorer app, very useful for exploring your data.


  • Craftsmanship beats passion every time (according to So Good They Can’t Ignore You). Becoming “so good they can’t ignore you” can only happen when you have sunk enough time into the thing you want to be good at. This means whatever we do, we should optimise our time to build upon this skill. This is referred to as “career capital”. If your job does not allow you to grow your career capital, it’s probably time to leave.
  • Sometimes it’s difficult to determine whether the hard parts of your job are the “grind” of getting better, or if it is genuinely just a bad job, a.k.a. one not optimising your career capital. Things to look out for:
    • Your job makes you work with people you don’t like
    • Your job has a negative impact on the world (in your opinion)
    • Your job provides you little opportunity to develop the skills that make you rare and valuable.


  • Learning the features of Express.js. When creating a web server using the Node.js http module, reading and working with data and response headers can get messy very quickly. Express makes this easier by abstracting these processes away with “middleware”. Express middleware can perform any arbitrary action, and using this pattern there are many pre-created and third-party middleware we can make use of. A frequently used one is express.Router(), which lets you programme actions depending on the URI. Another is bodyParser, which lets you tap into the req.body.message property to access the payload without needing to work with data streams. There are many more Express middleware available!
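The middleware-chain idea itself can be sketched without Express at all. The `Req`, `logger` and `auth` names below are invented for illustration; the real Express signature is `(req, res, next)`, simplified here to `(req, next)`:

```typescript
// Each middleware receives the request and a next() callback,
// and decides whether to pass control on down the chain.
type Req = { url: string; log: string[] };
type Middleware = (req: Req, next: () => void) => void;

const logger: Middleware = (req, next) => { req.log.push(`hit ${req.url}`); next(); };
const auth: Middleware = (req, next) => { if (req.url !== "/secret") next(); }; // stop unauthorised paths

function run(middlewares: Middleware[], req: Req): void {
  let i = 0;
  const next = () => {
    const mw = middlewares[i++];
    if (mw) mw(req, next); // call the next middleware, handing it the same next()
  };
  next();
}

const req: Req = { url: "/", log: [] };
run([logger, auth, (r) => r.log.push("handler")], req); // full chain runs

const blocked: Req = { url: "/secret", log: [] };
run([logger, auth, (r) => r.log.push("handler")], blocked); // auth never calls next()
```

The key design point is that a middleware which doesn't call next() halts the chain, which is exactly how Express short-circuits a request.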


  • Learnt about React Router for client side routing using the <Switch> and <Route> components.

    • the <Route> component can take a path prop which dictates what component to show depending on the URL path

    • to show a component at a specific path, use the component prop, which takes a React component as its input

      • if you want to provide a prop along with it too, you need to wrap the component in a function like so:
      function App() {
        const renderComponentWithProp = (props: any) => {
          console.log("screenC props", props);
          return <ScreenC {...props} message="This is some data" />;
        };
        return (
          <div className="App">
            <Route path="/c/:userid" component={renderComponentWithProp} />
          </div>
        );
      }
    • There are a bunch of additional props available on your component when used with the <Route> component, such as history, location and match. These let you have programmatic access to the react-router API. Alternatively you can tap into the react-router API using hooks such as useHistory and useParams.

  • Revising the fundamentals of creating an API with Node.js using the http module.

    • Status codes
      • 2xx - success
      • 4xx - user/request/browser error
      • 5xx - server error
    • Headers, metadata of the response
      • User-Agent - what browser & OS is sending the request
      • Referrer - the URL you were on before linking to the current URL
      • Cookie - text which contains more info about the user and session specific to the current website. The server can put almost anything into a cookie. Usually it has a session identifier and token.
      • Content-Type - the type of data which is in the body of the request e.g application/json
      • Access-Control-Allow-Origin - Used with CORS to allow a different URL to make requests to the server. * means any URL is allowed
      • Allow - indicates which HTTP verbs are supported.
    • When handling data in POST requests, the data is streamed in. Therefore we need to make use of the data and end event triggers to know when data has completed streaming, in an async pattern, and then perform the required actions on the data. The pattern is for the stream chunks to be captured in an array which is then put into a memory buffer; finally we can consume it as, for example, a JSON object.
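The buffer-then-parse step can be isolated as a small function. In a real server the chunks arrive via `req.on("data")` and `req.on("end")`; `collectBody` here just replays an array of chunks to show the same steps (the name is illustrative, not a Node API):

```typescript
// Join streamed chunks into one buffer, then consume the result as JSON.
function collectBody(chunks: Buffer[]): unknown {
  const body = Buffer.concat(chunks).toString(); // the "memory buffer" step
  return JSON.parse(body);                       // consume it, e.g. as a JSON object
}

// a JSON payload arriving split across two chunks
const parsed = collectBody([Buffer.from('{"message":'), Buffer.from('"hi"}')]);
```

Note that neither chunk on its own is valid JSON; that is why parsing has to wait for the end event.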


  • Learning about Redux vs the Context API for state management. The Fullstack React, TS & Node textbook recommends using Redux for larger applications. You can get away with using Context for smaller, less complicated apps. The main pitfall of Context is that it will cause children components to re-render even if they are wrapped in React.memo. This is because when you create a new context, this is also a new React component which does not use memo, so changes to this parent cause a knock-on effect on all children. There is no fix for this! 😢. Good blog which expands upon this a bit more - https://leewarrick.com/blog/the-problem-with-context/


  • Learning the basics of Redux. Setting up reducers, which remind me of actions in Vuex. The reducer is a way to filter down what kind of action should be performed depending on the action type provided. The action type is simply a string.

    • We can combine multiple reducers into a centralised Redux store using createStore and combineReducers from the redux package. Basic boilerplate for this looks like:
    // ./src/store/configureStore.ts
    import { createStore } from "redux";
    import { rootReducer } from "./AppState";

    const configureStore = () => {
      return createStore(rootReducer, {});
    };

    export default configureStore;

    // ./src/store/AppState.ts
    import { combineReducers } from "redux";
    import { UserReducer } from "./UserReducer";

    export const rootReducer = combineReducers({
      user: UserReducer,
    });

    export type AppState = ReturnType<typeof rootReducer>;

    // ./src/index.tsx
    import React from "react";
    import ReactDOM from "react-dom";
    import App from "./App";
    import { Provider } from "react-redux";
    import configureStore from "./store/configureStore";

    ReactDOM.render(
      <Provider store={configureStore()}>
        <App />
      </Provider>,
      document.getElementById("root")
    );
    • Like in Vuex, once the Redux store is created, data from the store can be consumed directly without needing to pass props down to the child components. To do this we make use of the useSelector hook to tap into Redux state, e.g.:
    const user = useSelector((state: AppState) => state.user);
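Under the hood a store is little more than a reducer plus the current state. A hand-rolled miniature (illustrative only, not the real Redux implementation) makes the createStore/dispatch relationship concrete:

```typescript
// Minimal store: holds state, and routes every dispatch through the reducer.
type Reducer<S> = (state: S, action: { type: string; payload?: unknown }) => S;

function miniCreateStore<S>(reducer: Reducer<S>, initialState: S) {
  let state = initialState;
  return {
    getState: () => state,
    dispatch: (action: { type: string; payload?: unknown }) => {
      state = reducer(state, action); // the reducer decides the next state
    },
  };
}

// a tiny reducer, like UserReducer above
const userReducer: Reducer<{ name: string }> = (state, action) =>
  action.type === "setName" ? { name: String(action.payload) } : state;

const store = miniCreateStore(userReducer, { name: "" });
store.dispatch({ type: "setName", payload: "dave" });
```

Seeing it this small makes it clearer why the action type is "simply a string": it is just the key the reducer switches on.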
  • Read some more of “So Good They Can’t Ignore You”. Passion vs the craftsman mentality. The craftsman mindset means you focus on what you can offer to the world. Some argue that you need passion to have the desire to hone your skills further, but often the case is that when you get good at something you start to become passionate about it.

    • Good projects come to those who have good skills.


  • Skimmed over notes on how to write integration tests in React with Jest and Testing Library. Testing Library allows you to easily target DOM elements and assert whether text, elements or other components are appearing correctly. Jest provides capabilities for snapshot testing and for mocking functions and components to avoid running slow network calls. Mocking is usually discouraged as we are diverging away from “real” behaviour, but it can sometimes be useful.

  • Starting to learn about why you may want to consider using Redux for larger-scale enterprise React apps. Redux may be useful in situations such as auth, where every component needs to be aware of the data. It’s unreasonable for every component to keep its own state for this, which is why Redux can resolve the issue by being a single source of truth.


  • Read more Full-stack React, TS & Node. A JS tooling history lesson and learning the basics of unit testing with Jest and testing-library.


  • Learning more about how the useCallback React hook works in practice. The first argument is a callback function; the second argument is an array of props or state which the hook is “watching”, keeping the current values of this data in memory. Only if one of those values has changed will a new callback function be created; otherwise the memoised one is reused.
  • Wrap your components in React.memo to ensure the component only re-renders if its props have changed. This provides a small performance benefit.


  • More reading on Full-stack React , TS & Node. React hooks.
    • useState replaces state and setState in class components. Used to update single values rather than objects; for complex objects useReducer might be better.
    • useEffect is similar to componentDidMount and componentDidUpdate in class components. However, it runs before drawing on the screen happens. It takes a second parameter to watch props or state for changes. You can use this hook multiple times. Passing an empty array as the second parameter forces it to run only once.
    • useCallback takes a function instance as its first argument; the second argument is an array of items that might change. This exists to save memory.
    • useMemo is similar to useCallback. It can cache the result of a long-running task and will only re-run if the props or state it subscribed to have changed; these are passed in as a second array argument.
    • useReducer is similar to React Redux. It takes two parameters, a reducer and the initial state. It returns a state object and a dispatcher. The reducer filters what to do based on the dispatched action.
    • useContext allows for global state which can be injected into any child regardless of hierarchy. An alternative approach is to use React Redux.
    • useRef is used to access the instance of an element and opt out of React's model. It does not trigger a re-render when the value changes.


  • Read some more Full-stack React , TS & Node. Learning about React lifecycle methods in class components

    • When a component is mounting we have access to the following methods:

      • constructor, the class constructor, used for initialising state
      • getDerivedStateFromProps, used for basing state on props from a parent component. Use carefully as it can cause re-renders
      • render, used to render out JSX
      • componentDidMount, happens after a component has initialised. A good place for API calls.
      • UNSAFE_componentWillMount, as the name implies, a legacy method. Avoid!
    • when a component is updating, we have access to the following methods:

      • shouldComponentUpdate, used to decide if a re-render should happen or not.
      • getSnapshotBeforeUpdate, captures the state of the DOM before a render happens. Usually used alongside componentDidUpdate
      • componentDidUpdate, runs immediately after a re-render completes. You can make additional changes to the DOM here, or update state. Important to have an exit condition so you don’t create an infinite loop.
    • when a component is unmounting, we have access to the following methods:

      • componentWillUnmount, used for cleanup work like removing event listeners or subscriptions
    • Tap into these lifecycles to help control re-renders. If re-renders get out of control, the UX will suffer.

    • React team recommendations:

      • componentDidUpdate is useful for triggering behaviour based on a prop change.
      • React.memo ensures re-renders happen only when a prop has changed, instead of whenever the parent re-renders.
      • Make your component fully controlled, so it has no state of its own. This usually means drilling a prop down many components deep, which can be annoying.
      • Use componentDidMount for rendering state based on an API call. Note componentDidMount only ever runs once.
      • componentDidUpdate is useful for managing state based on prop changes, but generally try to avoid using derived state. Try to just use props directly and have state managed from a parent component.


  • Read some more of “So Good They Can’t Ignore You”. Following your passion is dangerous advice which can lead to job hopping, forever looking for the thing you’re interested in. We romanticise the idea of following your passion by looking at elite athletes and famous icons. In reality you must learn to love what you do. One way to do that is with a craftsman mindset. When we measure our success and can see we are improving at something, we start to enjoy it more.

  • Read some more Full-stack React, TS & Node. Revised React basics with create-react-app.


  • More Full-stack React TS & Node. Revising ES6 features
    • revising bind, call and apply
    • bind is used to replace the instance of this. If a function relies on this, its context will have been altered to the new one provided
    • call is used to set this at the time the function is being called.
    • apply is similar to call, but the second argument you provide is an array of arguments which will be supplied to the method being called.
    • revised how reduce works
    • use sets for unique lists, use maps for unique object collections.
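These revision points can be seen side by side in a few lines (the `greet`/`alice` example is my own, not from the book):

```typescript
// bind / call / apply control what `this` means inside a function.
function greet(this: { name: string }, greeting: string): string {
  return `${greeting}, ${this.name}`;
}
const alice = { name: "Alice" };

const boundGreet = greet.bind(alice);        // bind: fix `this` now, call later
const viaCall = greet.call(alice, "Hi");     // call: supply `this` at call time, args listed
const viaApply = greet.apply(alice, ["Yo"]); // apply: same, but args passed as an array

// Set drops duplicates; Map keys collections by arbitrary keys.
const unique = new Set([1, 1, 2, 3, 3]);
const byId = new Map([["u1", alice]]);
```

bind returns a new function, while call and apply invoke immediately; that is the whole difference between the three.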


  • More Fullstack React, TS & Node

    • Abstract classes are used when you just want to specify the signature of a method in a class. This allows other classes which extend it to implement their own versions of the methods; this is also known as overriding.

    • Interfaces are another way to write a contract for your code. They contain no implementation details.

    • Generics allow your type definitions to include an associated type chosen by the user. Usually used when the exact type is unknown but will be used in various places, like an argument.

    • Optional chaining is useful for handling unknown object properties so that they are returned as null or undefined rather than throwing.

    • Nullish coalescing is a new shortcut for the ternary operator:

      val1 ?? val2; // return val1 if it is not null or undefined, otherwise return val2

      Nullish coalescing checks specifically for null or undefined. Use || to check for truthiness.
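Generics, optional chaining and the `??` vs `||` distinction can all be shown together. The helper `firstOrDefault` and the `settings` object are invented examples:

```typescript
// A generic: T is chosen by the caller, so the helper works for any element type.
function firstOrDefault<T>(items: T[], fallback: T): T {
  return items[0] ?? fallback; // ?? only falls back on null/undefined
}

// Optional chaining short-circuits to undefined instead of throwing.
const settings: { theme?: { color?: string } } = {};
const color = settings.theme?.color ?? "blue";

// Contrast ?? with ||: || also rejects falsy-but-valid values like 0 or "".
const maybeZero: number | null = 0;
const zeroOr = maybeZero || 42; // 42 - || treats 0 as falsy
const zeroQQ = maybeZero ?? 42; // 0  - ?? keeps it, since 0 is not null/undefined
```

The last two lines are the whole reason `??` exists: `||` silently discards legitimate zeros and empty strings.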


  • Revising TS with Full-Stack React, TS & Node.
  • Revising how private, readonly and static keywords are used in classes.
  • static members can only be accessed via the class name, so referencing a static property through an instance (this) is forbidden. static members are therefore useful for sharing state across multiple instances of the same class.
  • Revising how inheritance works in JS. When inheriting from a class, the super keyword is the reference to the parent class you are inheriting from.
    • the protected keyword means that the property is still private to the class, but anything inheriting from the class has access to the member.
  • namespaces in TS provide another level of encapsulation. Similar to classes in a way, but they can contain many classes.
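  The static / protected / super notes above as a quick sketch (illustrative names, not from the book):

  ```typescript
  class Counter {
    // static: one value shared by every instance, accessed via the class name only.
    static created = 0;

    // protected: hidden from outside code, but visible to subclasses.
    protected label = "counter";

    // readonly: may only be assigned at declaration or in the constructor.
    readonly id: number;

    constructor() {
      Counter.created += 1; // `Counter.created`, never `this.created`
      this.id = Counter.created;
    }
  }

  class NamedCounter extends Counter {
    constructor() {
      super(); // `super` refers to the parent class being inherited from
    }
    describe(): string {
      return `${this.label} #${this.id}`; // protected member accessible in the subclass
    }
  }

  new Counter();
  const named = new NamedCounter();
  Counter.created; // 2
  named.describe(); // "counter #2"
  ```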


  • Started reading Full-stack React TypeScript & Node. Revised TS basics and writing OOP code with TS.
    • Encapsulation, hiding away internal details; TS has the private keyword
    • Abstraction, related to encapsulation. Hide away the internal implementation details and provide a public API.
    • Inheritance, code reuse. JS uses prototypal inheritance so does not support inheriting from multiple “classes”. But with TS interfaces a class can implement multiple contracts…
    • Polymorphism, related to inheritance. Treat an object as any type it conforms to. Hard to do with JS but easy with TS interfaces.
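  The inheritance and polymorphism points can be illustrated with a small sketch (hypothetical names): a class can only extend one base class, but it can implement multiple interfaces, and code written against an interface accepts any implementer.

  ```typescript
  interface Walker {
    walk(): string;
  }
  interface Swimmer {
    swim(): string;
  }

  // JS prototypal inheritance allows only one base class, but a TS class
  // can implement several interfaces at once.
  class Duck implements Walker, Swimmer {
    walk(): string {
      return "waddle";
    }
    swim(): string {
      return "paddle";
    }
  }

  // Polymorphism: this function accepts ANY object conforming to Walker.
  function goForWalk(w: Walker): string {
    return w.walk();
  }

  goForWalk(new Duck()); // "waddle"
  ```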


  • Read “Do what Steve Jobs did, not what he said”. An art drop-out turned technologist is not someone following their passion. Sometimes you need to shift gears to find what you’re supposed to do.


  • Read more of So Good They Can’t Ignore You. Most self-help books promote the passion hypothesis, but “Follow your passion” may be bad advice…


  • Finished reading Deep Work. Closing advice: Deep Work is not for everyone and requires some drastic changes to your habits, but applying deep work brings huge gains.

    “I’ll live the focused life, because it’s the best kind there is.”


  • A bit of light-hearted coding. Messing around with the CSS transform property to create parallax effects.
  • Read some more Deep Work. Set people’s expectations that they won’t necessarily get a response back from emails, to take the pressure away from constantly checking your inbox.
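  The parallax tinkering could be reduced to a tiny pure helper. This is a hypothetical sketch, not the actual code from that session: layers translate by a fraction of the scroll distance so backgrounds appear to move slower.

  ```typescript
  // Hypothetical parallax helper: offset is a fraction of the scroll distance.
  // speed < 1 moves slower than the page (background), speed > 1 moves faster.
  function parallaxOffset(scrollY: number, speed: number): number {
    return Math.round(scrollY * speed);
  }

  // In the browser this would drive the CSS transform, e.g.:
  //   layer.style.transform = `translateY(${parallaxOffset(window.scrollY, 0.4)}px)`;
  parallaxOffset(200, 0.4); // 80
  ```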


  • Read some more Deep Work:
    • categorise your work into shallow and deep work. Sometimes we just can’t avoid shallow work (think emails, organising meetings), so set yourself a quota of how much time in a day you have for shallow work
    • emails are pretty much the bane of deep work. You have more control than you think: switch off your emails during deep work and become hard to reach
    • of course there is a balance between getting deep work done and being a reliable person. Work out how much time you can afford to switch off from the world.
    • Use “Fixed Schedule Productivity” so you can switch off at 5.30. Saying no is a meta habit that must be adopted for you to get the deep work you want done during the fixed hours you have scheduled. This means saying no to frequent tea breaks… This may also expose some hard truths about your workplace.
    • Plan every moment of the day so not a minute is wasted. Things will inevitably change; rearrange your plan and try to stick to it.


  • Final touches to my Next.js NEVERBLAND project. Learnt more about how the SWR hook works. Generally feel like the whole React ecosystem is great, but it comes at the cost of exponential complexity.


  • Put some hours into the Next.js coding project. Overhauled colours and put into practice all the CSS and design skills I have learnt: responsive design, design tokenisation, spacing and colour theory. Great to put it all into action. Tired now!


  • More work on the Next.js project. MVP is complete. Feel like I have a good grasp on React now! Just need to add some flair to the project. Animations, transitions, all that good stuff.
  • Read some more Deep Work. We underestimate how little time we have in one day. A highly productive person probably only has 4 hours of deep work time in one day, the rest of the day is shallow work. Plan every part of your day otherwise you will succumb to only doing shallow work.


  • Spent pretty much all day working on my React/Next.js project for NEVERBLAND. Hitting a few nuances with React, but getting over them pretty quickly. Enjoying the whole Next.js Vercel experience. They’ve really nailed the user/dev experience.


  • Worked on my coding project for NEVERBLAND. Getting a feel for React again, and I don’t hate it. It’s just different to Vue, that’s all. I am really enjoying Next.js though.


  • Learning about Next.js for a coding project for NEVERBLAND.
  • Learnt about the swr hook. Looks really useful and reminds me a lot of how I was using the Vue composition API.


  • Dabbling with PostgreSQL and learning about JSON and hstore. Was able to port project data from Firebase into a single Supabase JSON record.


  • Read more Deep Work. Plan your evenings with meaningful activities: learn guitar or have a series of books ready to read. Don’t fall back to mindless social media scrolling. Learn to be bored and not stimulated. This helps train your mind to concentrate.


  • Final touches and released my Nuxt vs Gridsome blog.
  • Tinkering with postgis and supabase. It just works!
  • Read some more Deep Work. Learn to memorise a deck of cards. World memorisation champions never use the “rote” method (repeatedly reading to memorise). Instead they form an association with each card, e.g. King of diamonds = “Donald Trump”. They then form a story around the cards. This process creates a scenario which your brain can memorise much more efficiently. The technique could be applied to memorising other things…


  • Read more Deep Work.
    • Practice being bored. Don’t reach for the internet / smartphone at the first moment of boredom. Work through the boredom. Condition the mind.
    • Work like Teddy Roosevelt. Work intensely for short periods of time.

      Estimate how long you’d normally put aside for an obligation… give yourself a hard deadline that drastically reduces this time…There should only be one possible way to get the task done…working with great intensity

    • Meditate productively. When doing something physically demanding but not mentally demanding, think intensely about the problem. Like in mindfulness, when the mind wanders, bring it back to the problem.
      • watch out for problem loops; exit them by acknowledging you’re in a loop and moving on to the next problem
      • structure your deep thinking. Work in layers, progressing slowly down the layers of the problem as you think.


  • Four disciplines of deep work:

    • Focus on the wildly important - focus on a small number of highly ambitious goals.

    • Act on the lead measures - these are behaviours that drive success on the lag measures. Lag measures are retrospective results which come after applying a behaviour. A good example of a lead measure is time spent in a state of deep work dedicated to your wildly important goals

    • Keep a compelling scoreboard - visualise your lead measures. Can be as simple as tracking how long you spend in a deep work state on a card or piece of paper.

    • Create a cadence of accountability - using your scoreboard, commit to a goal, e.g. 5 hours of deep work in a week.

    “It’s not so much the intensity of deep work but the regularity”

  • Be Lazy

    • Downtime aids insights - Your brain has two states. When conscious it’s like a dedicated computer program working on a particular task. When unconscious it’s like a Google datacenter sorting through unstructured data, forming patterns and connecting data together. Downtime helps your unconscious mind sort through new information and diversify the way you look at specific problems.
    • Downtime helps recharge the energy needed to work deeply - Concentration requires directed attention, and you only have a finite amount. Think of it like a rechargeable battery: rest recharges your directed attention energy. By not resting you’re robbing yourself of this energy.
    • The work that evening downtime replaces is usually not that important - You need a shutdown ritual, otherwise your mind will constantly be in a state where everything needs to get done. Regularly resting improves the quality of deep work.


  • Final tweaks on the new blog and finally released chiubaca.com to the world 🎉. The site still needs a lot more work, but I’m drawing a line under its development for now and tracking all further improvements under GitHub issues. Time to work on something else now!


  • Working on styling the Gridsome version of my blog. Pretty much looks exactly the same as my Nuxt blog now! I have a comparison between the static versions of both frameworks. I should probably blog about the experience of building in both.


  • Putting the final touches on the dev.to Gridsome source plugin. Think I can finally build my Gridsome blog around this plugin!


  • Progress on the Gridsome source plugin. Cleaned up markdown files by extracting out front matter. Implemented a time-to-read computed property.
  • Read some more Deep Work. Allow for serendipity when possible and build a collaborative environment to foster this. An open environment for creativity is important, but plan this space separately from the space you need for deep work. Don’t merge the two together.
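  The time-to-read computed property above can be sketched as a pure function. This is a hypothetical version assuming an average reading speed of 200 words per minute; the real plugin code may differ.

  ```typescript
  // Hypothetical time-to-read: word count divided by average reading speed,
  // rounded up, with a minimum of one minute.
  function timeToRead(markdown: string, wordsPerMinute = 200): number {
    const words = markdown.trim().split(/\s+/).filter(Boolean).length;
    return Math.max(1, Math.ceil(words / wordsPerMinute));
  }

  timeToRead("just a few words"); // 1
  ```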


  • Worked more on my Gridsome dev.to source plugin. Computing markdown using remark with some additional extras like code highlighting and auto-linked headings. I would like to make better use of other Gridsome plugins but just don’t understand how they all slot together.
  • Read more Deep Work. Grand gestures: do something grand to trigger yourself into a deep work state. J.K. Rowling famously checked into one of the most expensive hotels in Scotland to write the last few chapters of the Deathly Hallows.


  • Tweaks to my Gridsome source plugin. Added documentation. Thinking about whether I should be adding the extra computed properties or if that is outside the scope of the plugin. If it can be done outside of the plugin, perhaps that is the cleaner solution?
  • Read a small section of Deep Work. Ritualise your deep work time. Plan where you will work and for how long. Sort out any prep beforehand, e.g. get your water, coffee, snacks.


  • Worked on creating a Gridsome plugin with some success. Was quite a good experience. I think I can open source this plugin to retrieve articles from dev.to, and enhance their API with lots of additional properties like word count and time to read.
  • Got distracted with the dev.to gridsome plugin and went down a rabbit hole of learning about mdast (markdown abstract syntax tree). I really want to figure out how to deal with shortcodes and how to convert them into some meaningful HTML.


  • Happy New Years Eve!
  • Working more on my Nuxt blog. What a pain in the ass it is to roll your own blog. For now I’m just linking to my dev.to posts, but this feels like a cop out. I would like to figure out how to parse dev.to md files better and process shortcodes… But this is taking up so much time!


  • Going totally mad rewriting my blog with Nuxt. Driving me up the wall that I can’t figure out how to parse dev.to shortcodes or find some sort of library that can do it. Looking into Gridsome, but perhaps another time. Cheating and just putting links to dev.to instead for now and moving on with my life. Next, I’ve just got to make the site look a bit more presentable.


  • Spent hours rewriting my personal blog site in Nuxt. Realised my approach of pulling down and saving md files to disk wasn’t going to work in the context of GitHub Actions. When updating or deleting a file in GitHub you need to know the sha of the file. So how do I keep track of this sha? Need some sort of lookup table or something… Just seems overly complicated. Decided to make use of Nuxt asyncData, which lets you pregenerate routes at build time, which is pretty cool. Now trying to work out how to parse the contents of the dev.to posts in a presentable way. Shortcodes are driving me nuts. Do I take the HTML which dev.to has generated? Or do I parse the md myself and try to figure out how to build widgets from shortcodes? Why is this so complicated? Might just take the easy way out and create links to my dev.to posts for the time being. Figure this complexity out later.


  • Working on my Nuxt blog. Trying to write a script which pulls data from dev.to, then using the Nuxt content module to build routes from the md files.


  • Hit with insomnia. Read some more Deep Work. Learnt about four different ways to apply deep work:
    • Monastic: in the fashion of a monk. Completely seclude yourself from everyone.
    • Bimodal: divide your day into big chunks of dedicated deep work. Slightly more practical than the monastic approach.
    • Rhythmic: similar to bimodal, but allowing for more breaks. For example, go into deep work for 2-hour chunks at a time, have a break and go back into it. Similar to a pomodoro approach. Most compatible with normal day-to-day routines.
    • Journalistic: journalists need to be able to switch into a deep work mode in an instant if there is a breaking story that needs to be written up. This is an advanced behaviour which will take practice. It means you can switch in and out of a deep work mode on demand.
  • Working on my dev blog. Quite interesting trying to use dev.to as the CMS in conjunction with Nuxt and the content module. Trying to figure out how to use webhooks on both dev.to and GitHub Actions to automate the publishing and updating workflows.


  • Final tweaks to vue + ts supabase slack clone and drawing a line under this mini project for now
  • Started working on my dev blog which I’m building with Nuxt.js and the content module.


  • It’s Christmas! Taking it easy today, eating lots and chilling out. Starting on a blog reflecting on how my 2020 has gone.


  • Xmas eve today! Starting to rebuild my personal blog site using Nuxt and the content module. Though 11ty looks great and barebones, I want to code my own blog with minimal effort and have a good DX.
  • Troubleshooting some errors with the Supabase Vue slack clone too. Want to draw a line under it soon. I’ve got a much better grasp of how to build realtime apps with Supabase now.


  • A little bit of progress on the Supabase Vue slack clone. Working on styling via Tailwind, not getting very far

  • Read more Deep Work. Understanding the pitfalls of deep work

    You have a finite amount of willpower that becomes depleted as you use it

Feels like I’m reading Atomic Habits. Don’t rely on willpower; build up your habits to cultivate time for deep work.

  • “Monastic Philosophy of Deep Work Scheduling” - Minimise obligations to anything other than the thing that is most important to what you want to get done/learn. Say no to lots of people. Likely to impact your social identity in a big way. Not for everyone.


  • More work on the Supabase slack clone, ironing out some more tweaks. It’s pretty much all functional now: can switch between channels and messages, and new users can sign up. Need to make the UI look a little bit better. Happy with the groundwork so far though.


  • Good progress on the Supabase Vue slack clone. Learning lots just from reading React source code. First time implementing a Map data structure and I like it! All features are basically working now. Just got lots of styling issues to resolve, plus a code tidy-up and improving the typings somewhat.


  • Good progress on the Supabase Vue slack clone. Implemented adding new channels to the app.
  • Read more Deep Work. Deep work brings joy to your work. Deep work also brings meaning to your profession.

    A deep life is a good life, any way you look at it.


  • Spent way longer on Excalidraw for awesome free web tools repo than I should have done.
  • Read more Deep Work. Our brains get conditioned to always respond to the next interesting thing when we don’t practice deep work. Too much time out of deep work makes it harder to get back into it.


  • Made a start on some more advanced topics of PostgreSQL and started to learn about PL/pgSQL (Postgres’ extension of SQL that makes it a much more advanced language). Made a simple calculator function.
  • Went on to install the plv8 PostgreSQL extension and rewrote the calc in JavaScript, within PostgreSQL 🤯. Then spotted that there is now also a way to run WASM binaries in Postgres too. PostgreSQL land is wild!
  • Read a few pages of Deep Work. Emails are the bane of deep work. They cost companies literally millions of pounds in wasted productivity.


  • Completed Section 17. PostgreSQL Recipes, which concludes the basics section at https://www.postgresqltutorial.com/. A good set of tips which I can refer back to, especially the removing-duplicates section. The EXPLAIN operation is fascinating; amazing that Postgres can break down the processes involved in a SQL operation, very useful for troubleshooting an inefficient script I suppose!
  • Got side tracked looking at installing other procedural languages on Postgres - https://www.postgresql.org/docs/current/xplang-install.html . It has blown my mind that you can write Postgres functions in other languages, including JavaScript!
  • Learning more about the difference between PostgreSQL functions and stored procedures. Found some secret courses on creating functions and procedures. Also been reading up on Stack Overflow about how procedures can’t return values like a function, but can return values via INOUT parameters. Now what’s interesting is that Supabase can call stored procedures… Kinda like serverless functions!?


  • Completed Section 15. Conditional Expressions & Operators. Learnt about COALESCE for falling back to a default when a value is null; good thing learning TS taught me about nullish coalescing, which made this make sense. Also revised CASE for conditional statements and CAST using ::. I think I now understand all the syntax being used in the Supabase slack clone starter SQL.
  • Also completed Section 16. PostgreSQL Utilities and have a better idea of psql commands. Think I will stick to pgAdmin though!


  • Read some more Deep Work and how Jack Dorsey, the CEO of Twitter and Square, is an exception to the principles of deep work. He manages to solve hard problems amid lots of distractions and encourages people to engage with him during the day. Bottom line: you’re probably not Jack Dorsey. Most people need deep focus to produce deep work.


  • Finished Section 14. PostgreSQL Data Types in Depth on https://www.postgresqltutorial.com. A great module on the Array type and also hstore and JSON data management, all in SQL. Got me excited about how useful these data types will be for web apps like Jottivity. Also learnt about custom domains and types, which are useful for executing common queries as database functions. Mind blown. After many years of working with RDBMSs at a very shallow level, I am only now fully realising the capabilities of something like Postgres!


  • Completed Section 13. Understanding PostgreSQL Constraints on https://www.postgresqltutorial.com. Great to finally have a deeper understanding of setting constraints on tables for database-level data checking. Also learnt a lot more about foreign keys, which I thought I understood. Didn’t realise you can configure your child table records to automatically clean up when the related parent record is removed, via the CASCADE option (so useful!).
  • Started Section 14. PostgreSQL Data Types in Depth. I thought it might be too basic, but I’m still learning more of the intricacies of data types like Date and Timezone. Really cool to see so many inbuilt functions to manipulate dates and times to extrapolate and skew data in all sorts of different ways.


  • Completed Section 12. Managing Tables. Lots to take in, too much to remember, but good to have a reference of all the types of operations available in Postgres.
  • Read some more Deep Work.

    “High-quality work produced = (Time) * (Intensity of Focus)”. Get comfortable working in extended periods of deep work or get left behind.