21・11・24
- no entry yet
Importing types into a definition file is not intuitive! https://stackoverflow.com/a/51114250
Exploring realtime options to build big-two
Leaning towards Convex due to familiarity, but PocketBase for simplicity and portability looks very compelling.
JS CLIs with Clack
Started learning Neovim. Enjoying motions like gd and ci{ (and other ci variants). Just not fully sold on navigating a file with only hjkl; mouse scroll still feels superior and smoother. Will keep at it for the rest of this month.
free game assets:
Learnt about the Event Bus pattern, a.k.a. global signals.
Instead of letting child nodes handle their own signals, we centralise them at the highest point of a Godot game's state: an autoloaded singleton. This makes a single class available to all nodes at runtime. If we define signals on this class, any node can emit them simply by referring to it, e.g. GlobalSignal.hit.emit(). Any other node can also subscribe to these events, as if the global is broadcasting from the highest level. This pattern has been useful when building a HUD for a game: when my enemies die I need to emit how many points they were worth, so the HUD can show it on the UI.
GlobalSignal (autoload), signal: points
├── Enemy emits: GlobalSignal.points.emit()
└── HUD listens: GlobalSignal.points.connect(update_score)
open rar files on mac with :
tar -xf archive.rar
https://superuser.com/a/1842470
rough notes from playing around with Godot so far:
- Nodes can be organised into Scenes
- Scenes can also be composed into other Scenes
- Nodes communicate with other nodes via signals, which can send messages and/or arguments too.
Quickly make a clone of your repo to a separate directory with git worktree
git worktree add ../copied-repo
Get The Screen Width & Height Without JavaScript
Astro.js just keeps getting better. Server Islands are outperforming Next.js partial pre-rendering https://x.com/FredKSchott/status/1813995537089409268
Handling streams from OpenAI is tricky; a few resources look useful.
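A minimal sketch of the basic streaming pattern with the official openai Node client (the model name and prompt here are placeholders, not from any of those resources):
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// ask for a streamed completion instead of waiting for the whole response
const stream = await client.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model
  messages: [{ role: "user", content: "Write a haiku about shibas" }],
  stream: true,
});

// each chunk carries a partial delta; print it as it arrives
for await (const chunk of stream) {
  process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
}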
Making a custom agent with OpenAI is fun. You can create a bot with a particular purpose quite quickly with just a pre-prompt. It's a multi-part process though.
OpenAI keeps threads around for a little while, but it's probably not a good persistent storage solution… So what's the solution for storing long threads? Chuck it all in a DB?
https://community.openai.com/t/questions-about-assistant-threads/485239/3
Node.js has a lot of out-of-the-box functionality these days.
https://evertpot.com/node-changelog-cli-tool/
https://nodejs.org/api/util.html#parseargs-tokens
https://nodejs.org/dist/latest-v20.x/docs/api/cli.html#--env-fileconfig
https://nodejs.org/docs/v20.14.0/api/globals.html#fetch
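A tiny sketch combining a couple of these built-ins (Node 18/20+; the flag names and API URL are just examples):
import { parseArgs } from "node:util";

// built-in flag parsing, no commander/yargs needed
const { values } = parseArgs({
  options: {
    name: { type: "string", short: "n" },
  },
});

// global fetch has shipped without flags since Node 18
const res = await fetch(`https://api.github.com/users/${values.name ?? "chiubaca"}`);
console.log(await res.json());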
I launched shibes.lol and it's been a success. Amazing to see my ShibaEveryHour audience participate and add their shiba photo submissions. It's been so much fun.
This project validated my full-stack Cloudflare Astro starter:
Turbo development on shibes.lol. The focus: build a tool to submit shiba photos as quickly as possible. I still keep getting distracted by building non-important things, like basic auth and silly CSS 🤦.
Male Pattern Baldness PREVENTION a. Saw Palmetto 320 mg daily on a full stomach b. Finasteride 0.25mg, minimum 4 times a week (can cause erectile dysfunction, so take Rosemary Oil or Castor Oil). Use the spray version of Fin.
REGROW a. Scalp Massage, 5 mins min b. Minoxidil, apply daily min 1-2 times c. Microneedling / Derma Pen 1mm (press in, don’t roll), 1-2 times a week d. Gelatine supplementation (make hair thicker). Gelatin is made from animal collagen such as bones, skin, tendons and ligaments.
AVOID a. Processed foods, smoking & alcohol b. Dehydration, stress, poor sleep, showering with Fluoride
Note;
https://youtu.be/i7MLUy0Yac4?si=J0gck9vmXVhtheI4
Pairing Cloudflare image transforms on a zone with R2 images and unpic means I can transform images for free and have them optimised on the client with almost zero effort!
React 19 notes:
More full-stack Cloudflare teething issues. Turns out it's really hard to connect to D1 remotely? But how does Drizzle do this with their D1 client, I wonder?
https://zzai.tech/blog/connect-d1-database-remotely-from-local-svelte-development
Displaying R2 image objects as URLs the hard way:
// grab the R2 binding configured in wrangler.toml
const APP_BUCKET = Astro.locals.runtime.env.APP_BUCKET;
// read the object back out of the bucket by its key
const image = await APP_BUCKET.get("local-image", {});
const imageBuffer = await image?.arrayBuffer();
if (!imageBuffer) throw new Error("object not found in bucket");
// base64-encode the bytes so they can be inlined as a data URL
const imageBase64 = btoa(String.fromCharCode(...new Uint8Array(imageBuffer)));
const imageUrl = `data:image/png;base64,${imageBase64}`;
Display images from R2 the easy way:
Use the managed r2.dev domain - https://developers.cloudflare.com/r2/buckets/public-buckets/#managed-public-buckets-through-r2dev e.g. https://pub-example-bucket.r2.dev/${r2-image-key}
Note: linking your custom domain is the preferred method when going to prod, as you can use existing Cloudflare caching - https://developers.cloudflare.com/r2/buckets/public-buckets/#custom-domains
Transform images with Cloudflare Images and their URL API: https://developers.cloudflare.com/images/transform-images/transform-via-url/
Note: you must have a domain to be able to do this. Once enabled, it exposes a cdn-cgi sub-path on your domain (a.k.a. zone), e.g.
https://chiubaca.com/cdn-cgi/image/width=80,quality=75/https://assets.chiubaca.com/PxcxZE6aGI_kLOEM6gq_400x400.jpg
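A small helper to build those transform URLs; a sketch only, with the zone and option names mirroring the example above:
// build a /cdn-cgi/image/ transform URL on your own zone
function cfImageUrl(src: string, opts: { width?: number; quality?: number } = {}) {
  const options = Object.entries(opts)
    .map(([key, value]) => `${key}=${value}`)
    .join(",");
  return `https://chiubaca.com/cdn-cgi/image/${options}/${src}`;
}

// cfImageUrl("https://assets.chiubaca.com/PxcxZE6aGI_kLOEM6gq_400x400.jpg", { width: 80, quality: 75 })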
More full-stack Cloudflare with Astro. Now moving onto R2, which I originally wanted to test out for image assets. Uploading images is straightforward enough, but reading the objects back out as images is proving tricky, as we're working with R2 objects which might need parsing, converting, etc.
Is https://www.cloudflare.com/en-gb/developer-platform/cloudflare-images/ a better fit for this? --- Seems using the AWS S3 client to interface with R2 is the way forward? - https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/client/s3/
Working on a full-stack Astro Cloudflare starter. Trying to integrate Lucia auth with Drizzle and Cloudflare D1. Lucia docs are barebones and imprecise. Luckily this video helped a lot! https://www.youtube.com/watch?v=urAHMyBXM6k
Discovering lots of weird quirks with Cloudflare Pages. The hardest thing to reason about is discrepancies between local dev and production.
The main thing to remember is local dev is not a Cloudflare runtime, so we need to be conscious of library choices. This is not a bad thing really…
Environment variables are weird:
When pushing a local codebase to Cloudflare Pages, it absorbs the envs you have specified in your .env.
When using a GitHub branch deployment, environment variables specified can't be accessed via Astro's import.meta.env. Instead they appear to be injected into the Cloudflare runtime and can be accessed via Astro.locals.runtime. This object is available in Astro files and middleware. https://docs.astro.build/en/guides/integrations-guide/cloudflare/#cloudflare-runtime
For local dev, secrets go in a .dev.vars file.
https://thecopenhagenbook.com/
tsx lets you run any typescript file as easily as
npx tsx ./script.ts
https://tsx.is/getting-started
With the absolute failure of delete-fullstack-cloudflare. I’ve been exploring Astro.js as a full-stack web dev solution. Especially with the announcement of Astro Actions, it feels like Astro can stand its ground against the likes of Next.js.
New goal, get all these features working together in Astro:
Writing up notes on delete-fullstack-cloudflare. I want to build on Cloudflare Pages with Next.js so much, but there are just too many gotchas and issues I'm running into. Logged the following issues on the next-on-pages repo:
Next to explore: Next.js with SST? Or Astro Actions with the Cloudflare adapter?
Shiki is the code highlighter built into Astro.
One challenge I've had for a long time is code block line highlighting, but with Shiki paired with transformerNotationHighlight from @shikijs/transformers it should be possible!
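A sketch of how this might be wired up in astro.config.mjs, assuming a recent Astro that passes shikiConfig.transformers through and that @shikijs/transformers is installed:
import { defineConfig } from "astro/config";
import { transformerNotationHighlight } from "@shikijs/transformers";

export default defineConfig({
  markdown: {
    shikiConfig: {
      // lines annotated with `// [!code highlight]` in fenced blocks get a highlight class to style
      transformers: [transformerNotationHighlight()],
    },
  },
});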
A few problems porting an existing Next.js app to Cloudflare Pages. It works locally, but in production I get a cryptic error along the lines of:
Application error: a server-side exception has occurred (see the server logs for more information). Digest: 2354650810
Cloudflare logs reveal the following error:
"Error: The 'credentials' field on 'RequestInitializerDict' is not implemented.
Lots of people are seeing this error pop up in various forms; one thread that seems related… https://github.com/cloudflare/workers-sdk/issues/2514
Big reason to move off Vercel to Cloudflare Pages: the free tier can be used for commercial use: https://community.cloudflare.com/t/cloudflare-pages-for-commercial-use/306890/2
nice guide to work with nextjs and cloudflare pages https://logsnag.com/blog/deploying-nextjs-13-app-dir-to-cloudflare-pages
Thank you Netlify. You’ve been amazing over the years. But Cloudflare pricing, reliability and general future outlook can’t be denied. The migration of my projects to Cloudflare has begun. notes.chiubaca.com and chiubaca.com have been ported over seamlessly.
Thinking about calmness and this blog
I love the idea of a calm company
So many great announcements from cloudflare https://blog.cloudflare.com/pages-workers-integrations-monorepos-nextjs-wrangler#improved-nextjs-support-for-pages
Keep thinking about this Netlify horror story.
Considering :
High-level steps for getting a mockup via their REST API:
Get all products: https://api.printful.com/products e.g. we're interested in stickers:
{
"id": 505,
"main_category_id": 202,
"type": "STICKER",
"description": "These glossy stickers are designed just for you. Give your favorite items, like a notepad, laptop, or phone case an extra glow and personal touch. \n\n• Film type: MPI 3000 Gloss HOP (EU), Promotional Monomeric PVC (US)\n• 0.12″ (0.3 cm) white sticker border \n• Glossy finish\n• Fast and easy application\n• 2–3 year durability\n• Indoor use\n• Blank product sourced from Japan\n\nDon't forget to clean the surface before applying the stickers.",
"type_name": "Kiss Cut Sticker Sheet",
}
Use id 505 to make the next request:
https://api.printful.com/products/505
This returns full Printful stock, variants and other useful details about this product. Most importantly, keep note of the variant IDs, e.g. "id": 12917.
Use the product id again to get printfiles for this product: https://api.printful.com/mockup-generator/printfiles/505
e.g response:
{
"code": 200,
"result": {
"product_id": 505,
"available_placements": {
"default": "Print file",
"background": "Background"
},
"printfiles": [
{
"printfile_id": 256,
"width": 1750,
"height": 2482,
"dpi": 300,
"fill_mode": "fit",
"can_rotate": false
},
{
"printfile_id": 259,
"width": 1822,
"height": 2554,
"dpi": 300,
"fill_mode": "cover",
"can_rotate": false
}
],
"variant_printfiles": [
{
"variant_id": 12917,
"placements": {
"default": 256,
"background": 259
}
}
],
"option_groups": [
"Flat",
"Holiday season",
"Lifestyle",
"Lifestyle 2",
"Lifestyle 3",
"Lifestyle 4",
"Spring/summer vibes",
"Valentine's Day",
"template"
],
"options": [
"Background",
"Christmas",
"Christmas 2",
"Front",
"Spring/Summer",
"Spring/Summer 2",
"Valentine's Day",
"Valentine's Day 2"
]
},
"extra": []
}
We're now ready to form a mockup generator request: https://api.printful.com/mockup-generator/create-task/505
This is a POST request which needs a body with a payload like:
{
"variant_ids": [
12917,
],
"format": "jpg",
"files": [
{
"placement": "default",
"image_url": "https://clever-stork-292.convex.cloud/api/storage/9fdcfdeb-ee06-47a2-83ff-01184c939d0d",
"position": {
"area_width": 1800,
"area_height": 1800,
"width": 1800,
"height": 1800,
"top": 0,
"left": 0
}
},
]
}
this will return a task that can be polled at an interval:
{
"code": 200,
"result": {
"task_key": "gt-645099290",
"status": "pending"
},
"extra": []
}
https://api.printful.com/mockup-generator/task?task_key={task_key}
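A rough sketch of polling that endpoint until the mockup is ready; the Bearer auth header, status values and 5-second interval are my assumptions, not taken from the Printful docs:
async function waitForMockup(taskKey: string) {
  while (true) {
    const res = await fetch(
      `https://api.printful.com/mockup-generator/task?task_key=${taskKey}`,
      { headers: { Authorization: `Bearer ${process.env.PRINTFUL_TOKEN}` } } // assumed auth scheme
    );
    const { result } = await res.json();

    if (result.status === "completed") return result; // should contain the mockup URLs
    if (result.status === "failed") throw new Error("mockup generation failed");

    await new Promise((resolve) => setTimeout(resolve, 5000)); // poll again in 5 seconds
  }
}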
TODO:
Figure out how to automate placing an order:
POST to https://api.printful.com/orders with a body like the following works:
{
"external_id": "sticker-2-4235234213",
"shipping": "STANDARD",
"recipient": {
"name": "John Smith",
"company": "John Smith Inc",
"address1": "19749 Dearborn St",
"address2": "string",
"city": "Chatsworth",
"state_code": "CA",
"state_name": "California",
"country_code": "US",
"country_name": "United States",
"zip": "91311",
"phone": "string",
"email": "[email protected]",
"tax_number": "123.456.789-10"
},
"items": [
{
"id": 1,
"external_id": "sticker-item-2",
"variant_id": 1,
"quantity": 1,
"price": "13.00",
"retail_price": "13.00",
"name": "Kiss Cut Sticker Sheet",
"product": {
"variant_id": 12917,
"product_id": 505,
"image": "https://printful-upload.s3-accelerate.amazonaws.com/tmp/71dbbab32afaf7a761f7a28b91ac9268/kiss-cut-sticker-sheet-white-front-6600501f5d2c0.png",
"name": "Kiss Cut Sticker Sheet"
},
"files": [
{
"type": "default",
"url": "https://printful-upload.s3-accelerate.amazonaws.com/tmp/71dbbab32afaf7a761f7a28b91ac9268/kiss-cut-sticker-sheet-white-front-6600501f5d2c0.png",
"options": [
{
"id": "template_type",
"value": "native"
}
],
"filename": "shirt1.png",
"visible": true,
"position": {
"area_width": 18,
"area_height": 2400,
"width": 1800,
"height": 1800,
"top": 300,
"left": 0,
"limit_to_print_area": true
}
}
],
"options": [
{
"id": "OptionKey",
"value": "OptionValue"
}
],
"sku": null,
"discontinued": true,
"out_of_stock": true
}
],
"retail_costs": {
"currency": "USD",
"subtotal": "10.00",
"discount": "0.00",
"shipping": "5.00",
"tax": "0.00"
},
"gift": {
"subject": "To John",
"message": "Have a nice day"
},
"packing_slip": {
"email": "[email protected]",
"phone": "+371 28888888",
"message": "Message on packing slip",
"logo_url": "http://www.your-domain.com/packing-logo.png",
"store_name": "Your store name",
"custom_order_id": "kkk2344lm"
}
}
Been going deep into Stripe, Stripe.js and Stripe Elements. The killer feature of Stripe appears to be Stripe Elements, which lets you embed checkout elements anywhere on your website to build custom checkout flows. LemonSqueezy on the other hand only seems to have the Hosted and Overlay flows. Once they have a similar product to Elements it could be a real Stripe killer.
Stripe docs are great, especially how parts of the code get highlighted as you scroll. Stripe uses https://markdoc.dev/ behind the scenes. Something similar could be created with https://expressive-code.com.
AbortController is a web standard for aborting requests via a signal.
This is quite important in React 18 due to the double firing of useEffect in dev mode. We probably want to cancel redundant requests when components mount and re-mount.
useEffect(() => {
// Setup our abortController
const abortController = new AbortController();
const getAsyncData = async () => {
try {
const res = await fetch(`/api/some-data?query=${queryParam}`, {
method: 'POST',
signal: abortController.signal,
});
const json = await res.json();
const clientSecret = json.clientSecret;
setStripeState({ state: 'success', clientSecret });
} catch (e) {
if (!abortController.signal.aborted) {
setStripeState({
state: 'error',
message: 'Unknown error',
});
}
}
};
getAsyncData();
// if the component unmounts or the queryParam changes we can cancel the fetch request like this:
return () => {
abortController.abort();
};
}, [queryParam]);
https://volta.sh/ is a better nvm alternative. It detects the node version in your project if engines has been configured in your package.json.
"engines": {
"node": "18.x"
},
excited about ElectricSQL - Sync for modern apps (electric-sql.com)
grep.app | code search search for coding examples in github repos.
Integrated the same demo from SDXL Lightning by fal.ai (fastsdxl.ai) into the sticker-ai project and it's amazing! Real-time image generation is a game changer in terms of UX for projects that generate images from text.
excellent directory of tailwind animations animation.ibelick - Tailwind CSS animation gallery
Using Convex to build sticker-ai and it's a lot of fun! Getting into the weeds of how Convex works is interesting. Its patterns and principles make it a delight to work with. Things I like about it so far:
/convex
things that are confusing:
Reading resources on openai.com, so much good stuff:
( shit lol, this was my whole idea)
convert bitmap to svg -Potrace (sourceforge.net)
Replicate.com is such an amazing platform for using and training AI models. So exciting exploring and tinkering with fine-tunes and existing models. Questions I have though…
Open AI also has the ability to fine tune…
GPT4All from Nomic AI lets you run models locally, hassle-free.
got access to Fly GPUs · Fly , this makes serverless AI possible.
Researching more into contracting life and what it entails.
Points from Guide to software engineering contracting in UK (codedeepdives.com)
Other things I don't understand yet:
Notes from GPT-4
Certainly! As a software engineer contractor, organizing your work effectively with clients is crucial for successful project delivery. Here are some essential tips to help you stay on track and maintain a positive client relationship:
Clear Communication:
Detailed Requirements Gathering:
Agree on Milestones and Deadlines:
Contract and Legal Aspects:
Regular Work Reports:
Risk Management:
Quality Assurance and Testing:
Documentation:
Feedback and Iteration:
Professionalism and Boundaries:
Remember, every client is unique, so adapt these principles to suit the specific project and client dynamics. Building strong client relationships will lead to repeat business and referrals in the long run! 🌟
Paperwork templates: Free Contract Templates and Agreements (Word or PDF) | Signaturely
Excited about counterscale: Scalable web analytics you run yourself on Cloudflare (github.com)
Mahonia aquifolium - can suppress inflammation and slow down the growth of skin cells
Indigo naturalis - also known as qing dai, is a traditional Chinese herbal medicine
Aloe vera - antibacterial and anti-inflammatory properties. They may help soothe the skin and fight bacteria that could cause infections.
height: 100vh is not that great because content will get chopped off by mobile phone bottom bars. I've got around this with:
html,
body {
margin: 0;
padding: 0;
height: 100%;
width: 100%;
}
However, 100svh should also solve this problem.
It's amazing to see the best tooling and libraries transcend into standard web APIs. In Musical OSM I have been able to drop Sass for vanilla CSS with zero code changes.
turn this into a JS game Figen on X: “Physics game. https://t.co/HmmvYGolsP” / X (twitter.com)
- Revived Musical OSM by proxying requests from planet-osm via a Netlify edge function and it works beautifully again. Also took the opportunity to simplify the code massively.
possibly the best drawer component i’ve seen…
A GPT trained on geoscience code papers. Could it help write geospatial queries?
Looking into installing Stable Diffusion locally and the steps seem to have gotten easier. The two main options right now that offer a nice GUI over SD are:
Comfy UI looks more powerful but AUTOMATIC1111 seems to be an easier install. Trying the latter for now.
Not used Netlify functions in a while and it’s improved a lot! Including first class support for TypeScript.
I was able to build a proxy server to append additional headers to an image response very quickly. ArrayBuffers and Blobs still confuse me quite a bit in Node.js land.
we fall down
we learn
we stand up
syntax-tree/unist-util-visit: utility to visit nodes (github.com) is a handy utility to recursively walk a syntax tree.
Learning more about Git Submodules. It's meant to be used to link git repos together if they're dependencies of each other.
Adding a submodule to an existing git repo is as straightforward as this command:
git submodule add https://github.com/chaconinc/DbConnector
This seems perfect for chiubaca-monorepo where I need to link it to my notes repo. The last piece of the puzzle is if I can get a sync working between that repo and my astro codebase. I have not been successful with symbolic links on windows.
Astro.js has a built-in mechanism to manipulate markdown via rehype plugins. These are functions that manipulate the markdown AST, which makes it straightforward to build your own. I'm doing this in chiubaca-mono to convert my Obsidian image and markdown references so paths are corrected when published to my Astro websites. A rough sketch of the idea is below.
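A minimal sketch of what such a plugin can look like; the attachments/ prefix and /images/ target are hypothetical, the real plugin lives in chiubaca-mono:
import { visit } from "unist-util-visit";

// rehype plugins receive the HTML AST and can mutate nodes in place
export function rehypeFixObsidianPaths() {
  return (tree: any) => {
    visit(tree, "element", (node: any) => {
      if (node.tagName === "img" && typeof node.properties?.src === "string") {
        // rewrite Obsidian-style attachment paths to wherever the site serves them from
        node.properties.src = node.properties.src.replace(/^attachments\//, "/images/");
      }
    });
  };
}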
UIs with nested segments may not need to be built using recursion.
Reading up on how webmentions work. I didn't realise it is a W3C spec. Getting started with Webmentions [2022 Tutorial] (daily-dev-tips.com)
Reading Perfectionism and Procrastination: How They’re Connected and What to Do About It – Solving Procrastination reminded me of 2023-10-20.
Happy new year!
Feeling optimistic that this is the year one of side-projects can turn into a real side-hustle.
v2 of chiubaca.com and notes.chiubaca.com is live, all styled up! Still a few minor issues and features I'd like to get in place, but it's 90% there. I'm ready to write more in 2024!
Very happy with the work I've done consolidating a bunch of code for my websites on *.chiubaca.com into a single Nx monorepo at https://github.com/chiubaca/chiubaca-monorepo . My favourite aspect is that both notes.chiubaca.com and chiubaca.com are driven 100% by my notes in this Obsidian vault. This is made possible because both Obsidian and Astro.js treat YAML frontmatter as a first-class citizen, so I can do almost anything I want in terms of data manipulation as long as I have consistent frontmatter across all my notes. A big write-up of this is needed.
Configure Cloudflare DNS to Work with Netlify (stevepolito.design)
GitHub fine-grained personal access tokens play nicely with Tokens(classic) which is nice. So no code refactoring required if you’re using https://github.com/octokit/octokit.js.
Invoke GitHub Actions manually with workflow_dispatch
name: Create Empty Daily Note
on:
# run workflow every morning at 6AM
schedule:
- cron: "0 6 * * *"
# run this workflow manually from the github Actions tab
workflow_dispatch:
Migrated my *.chiubaca.com sites over to [chiubaca/chiubaca-monorepo (github.com)](https://github.com/chiubaca/chiubaca-monorepo). Solidifying a lot of my knowledge around setting up Nx, Astro and general web tooling. Need to write a big write-up of how I've totally over-engineered this whole setup.
Let's get back to journaling! Starting a new job has naturally been crazy! Things I want to write about:
Advanced browser debugging tricks
Been working with QGIS more and more at work and it continues to delight. The model designer is as powerful as ArcGIS Model Builder. Managed to automate a raster processing pipeline that converts pixels to GeoJSON points, all through the model designer, without needing to read any documentation.
Control all monitors brightness on windows - emoacht/Monitorian: A Windows desktop tool to adjust the brightness of multiple monitors with ease (github.com)
It's been fun revisiting A-Frame after all these years. Funny how I was able to hack away at a mapping demo so early on in my coding journey and now I'm being paid to explore a real-life AR mapping use case.
AR.js Location Based has been very fun to work with. It’s now a rabbit hole that will make me want to explore Aframe and three.js deeper…
One major benefit of MapLibre over Mapbox is that we can add custom terrain layer sources (Custom MapLibre Terrain Tiles | bert (wordpress.com)), served from an x/y/z file structure. mbutil (mapbox/mbutil: Importer and Exporter of MBTiles (github.com)) can help with this.
But what if you had to create a raster data set from height points? (Interpolating Point Data — QGIS Tutorials and Tips)
[console logs viewer](markknol/console-log-viewer: Displays logs and Javascript errors in an overlay on top of your site. Useful for mobile webdevelopment. Enabled in 5 seconds (github.com)) perfect for debugging in mobile environments
client side background remover that uses wasm - imgly/background-removal-js: Remove backgrounds from images directly in the browser environment with ease and no additional costs or privacy concerns. Explore an interactive demo. (github.com)
shaders and webgl stuff
The upside of imposter syndrome
“In those moments where you feel like an imposter… you realise ‘I have something to prove’, so you’re not complacent,”
The Unexpected Benefits of Doubting Your Own Competence
Vector tiles stuff
3D GIS
How to solve cors issue in geoserver || GeoServer || GeoDev - YouTube
Navigate to your geoserver installation e.g C:\Program Files\GeoServer
and look for the web.xml
file under \webapps\geoserver\WEB-INF
Uncomment the filter settings. There is a comment <!-- Uncomment following filter to enable CORS in Jetty. Do not forget the second config block further down. -->
No need to uncomment the CORS settings for Apache Tomcat.
there is one more block which has a comment <!-- Uncomment following filter to enable CORS -->
. Enable the code below:
Prefix commands with wsl to run them in WSL from Windows, e.g. wsl ls.
Vector tile questions:
Overview of Vector Tiles - YouTube
https://www.mapzen.com/projects/vector-tiles
Mistakes I won’t make again:
Ignoring bugs that I can’t reproduce consistently, they’re the most important to fix.
Being so burnt out that I fail to tell others I'm burnt out.
Not asking for support when I need it.
Not pushing for planning when I know we really need it
Pushing ahead with no real plan.
open source notion like text editor - https://www.blocknotejs.org
full stack typescript alternative to firebase - https://www.convex.dev/
TypeScript projects like ts-sql blow my mind.
Looking at more modern web mapping tech choices
$() (get one element handle) or $$() (multiple handles) to parse the block and extract data from a DOM element.
Jason Fried on company culture:
Culture simply happens. It’s emergent behavior. There’s nothing to do, it just is.
A company’s culture is a 50-day moving average. It’s what you’ve been collectively doing as a company over the last 50 days.
Agency life has been tough. No doubt about it. Reflecting on what's been good and what's been bad.
The good:
The bad:
Great to be getting excited about geospatial things again. Starting to think about what a modern GIS stack looks like. When I last looked it was PostGIS, GeoServer, Leaflet / Mapbox. New things I want to explore: vector tiles / Protomaps, deck.gl.
How to Use the Gmail SMTP Server to Send Emails for Free
Trying out react-instantsearch-hooks and I'm very impressed. They've thought of everything. This small bit of code does everything you want a search UI to do:
import algoliasearch from 'algoliasearch/lite';
import {
Configure,
Highlight,
Hits,
InstantSearch,
Pagination,
RefinementList,
SearchBox,
} from 'react-instantsearch-hooks-web';
import { history } from 'instantsearch.js/es/lib/routers';
const routing = {
router: history(),
};
const searchClient = algoliasearch(
'A5NH1O01I7',
'26c89c07aa764a90782aff6ffba5ac66'
);
function Hit({ hit }) {
return (
<article>
<h1>
<Highlight attribute="post_title" hit={hit} />
</h1>
</article>);
}
const Page = () => {
return (
<div>
<InstantSearch
searchClient={searchClient}
indexName="my_first_index"
routing={routing}>
<SearchBox
classNames={{
root: 'h-3',
}}
/>
<RefinementList attribute="categories" />
<Hits hitComponent={Hit} />
<Pagination />
</InstantSearch>
<Login />
</div>
);
};
This gives a search box which populates the UI with results. We can filter by facets and there is text highlighting. To override classes we can use Tailwind via the classNames prop they provide; each class can be overridden.
Reading React Essentials for Next.js 13.
Via children or a custom prop, a server component can be passed into a client component. This means something like this is possible:
<RootServerComponent>
<ClientComponent>
<ChildServerComponent />
</ClientComponent>
</RootServerComponent>
There is a server-only package which can be installed with npm install server-only and imported like so:
import 'server-only';
Providers need to live in a use client component, e.g.:
'use client';

import { ThemeProvider } from 'acme-theme';
import { AuthProvider } from 'acme-auth';
export function Providers({ children }) {
return (
<ThemeProvider>
<AuthProvider>{children}</AuthProvider>
</ThemeProvider> );
}
Next js Framework Environment Variables
React.use() has been written up as a permanent note: quick-look-at-react-use-hook.
Notes from the React RFC for first-class support for promises and the use hook. This hook is special: for one thing, it can be used conditionally, which breaks one of the rules of hooks.
function Note({id, shouldIncludeAuthor}) {
const note = use(fetchNote(id));
let byline = null;
if (shouldIncludeAuthor) {
const author = use(fetchNoteAuthor(note.authorId));
byline = <h2>{author.displayName}</h2>;
}
return (
<div>
<h1>{note.title}</h1>
{byline}
<section>{note.body}</section>
</div>
);
}
Reminder of how to handle double fetching in client-side React when using useEffect:
useEffect(() => {
let ignore = false;
async function startFetching() {
const json = await fetchTodos(userId);
if (!ignore) {
setTodos(json);
}
}
startFetching();
return () => {
ignore = true;
};
}, [userId]);
Revisiting 20211101 and trying to keep footers in the correct location. This works well:
<body>
<main><!-- Main content --></main>
<footer><!-- Footer content --></footer>
</body>
body {
min-height: 100vh;
display: flex;
flex-direction: column;
}
footer {
margin-top: auto;
}
Rewriting jweidesigns.com with the Next 13 app dir today, some thoughts:
type BlogSlugPageProps = {
params: { slug: string };
};
export default function BlogSlugPage({ params }: BlogSlugPageProps) {
const slug = params.slug;
return <>Slug page | {slug}</>;
}
Turns out this is the more informative doc that explains how route groups work: https://nextjs.org/docs/app/building-your-application/routing/route-groups
Not kept up my notes for soo long because of work projects! 😭
Framer Motion variants: when we key our animation objects into a variants object like so…
const container = {
hidden: { opacity: 1, scale: 0 },
visible: {
opacity: 1,
scale: 1,
transition: {
delayChildren: 0.3,
staggerChildren: 0.2
}
}
}
… the magic behind framer motion is that it knows how to tween between the two animation states when you provide the keys of the variants objects into Framer component props e.g animate
import {motion} from 'framer'
...
<motion.div> I behave like a regular div </motion.div>
The style prop has superpowers now and can accept "motion values":
function Component() {
const x = useMotionValue(0)
useMotionValueEvent(x, "animationStart", () => {
console.log("animation started on x")
})
useMotionValueEvent(x, "change", (latest) => {
console.log("x changed to", latest)
})
return <motion.div style={{ x }} />
}
In this example, the x value can be passed into style and the component can magically animate the x value without tonnes of re-renders. Framer does its animations outside of the React re-rendering process.
import * as React from 'react'
type ButtonProps = React.HTMLProps<HTMLButtonElement>
const FancyButton = React.forwardRef<HTMLButtonElement, ButtonProps>((props, ref) => (
<button type="button" ref={ref} className="FancyButton">
{props.children}
</button>
))
// You can now get a ref directly to the DOM button:
const ref = React.createRef<HTMLButtonElement>()
<FancyButton ref={ref}>Click me!</FancyButton>
The Monkey C VS Code extension is key.
If java is not picked up in the terminal, download Java from here - https://www.oracle.com/uk/java/technologies/downloads/#jdk19-mac
tutorials:
git-sim is a tool to visualise your git actions. installing on a mac m1 is awkward though:
- brew install py3cairo ffmpeg
- pip3 install manim
- pip3 install git-sim
Then we can run something like git-sim merge main.
Remix pushes you to use <form/> to make calls to the server. We're probably more used to POSTing with something like fetch or axios, but Remix teaches you vanilla form actions. When paired with an action function to process the form server-side, it makes for a really tidy DX. Interestingly, form actions work with the browser <form> element, which will cause a full page reload. If you need a more SPA feel, they also provide a Remix <Form> component.
random tinkerings:
storybook compositions is the only way to go if you want to test different frameworks right now
remix vs next.js
A little gotcha with React Query: onMutate is different from mutationFn! Reminder to always read the docs!
https://tkdodo.eu/blog/mastering-mutations-in-react-query#some-callbacks-might-not-fire
yarn add -D @nx-tools/nx-prisma
The user facing software spectrum
good things about Remix
Load the data right in the same component. (next.js 13 can do this too now…)
Mutation with form actions handles server-side write functionality cleanly.
Confusing things about Next.js: .cjs and .mjs - https://github.com/privatenumber/pkgroll
nx: command not found error
When a div is absolutely positioned and you want it to fill its parent you can use inset: 0, which is the same as:
top: 0;
right: 0;
bottom: 0;
left: 0;
CSS radial gradients can get complicated. At its most basic it's a function that takes as many colour stops as you want and distributes them evenly in a circular fashion.
background: radial-gradient(red, green, blue);
We can add a percentage that defines each colour’s stop points. CSS blends the colours for us.
background: radial-gradient(red 10%, green 20%, blue 80%);
OK, so there is also a secret optional preliminary argument too… MDN documents the radial-gradient function like so:
radial-gradient( [ <ending-shape> || <size> ]? [ at <position> ]? , <color-stop-list> )
<ending-shape> - can either be circle or ellipse; ellipse is basically just a stretched circle that matches the aspect ratio of the element it's in.
<size> - has four options documented here:
closest-side
closest-corner
farthest-side
farthest-corner (default)
<position> - defaults to center but supports an x and y. This is the param that we can make dynamic via a CSS variable to achieve interesting effects like a spotlight effect.
so our simple CSS radial gradient could now look like this:
background: radial-gradient(farthest-side circle at var(--x, 0%) var(--y, 10%), red 10%, green 20%, blue 80%);
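A tiny sketch of feeding those --x / --y variables from the pointer to get the spotlight effect; the .spotlight-card selector is hypothetical:
const card = document.querySelector<HTMLElement>(".spotlight-card");

if (card) {
  card.addEventListener("pointermove", (event) => {
    const rect = card.getBoundingClientRect();
    // convert the pointer position into percentages relative to the element
    const x = ((event.clientX - rect.left) / rect.width) * 100;
    const y = ((event.clientY - rect.top) / rect.height) * 100;
    card.style.setProperty("--x", `${x}%`);
    card.style.setProperty("--y", `${y}%`);
  });
}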
Discovered the CSS perspective property today, which when used in tandem with transform: rotate() can create some really cool 3D effects. perspective must be applied to the parent of the element on which you are running the transform property.
Rotate an SVG
To animate a group of SVG elements and make them spin indefinitely, you can use the <g>
element to group them together, and then use the <animateTransform>
element within the <g>
element to animate the group as a whole. Here is an example that shows how to create an infinitely spinning group of SVG elements:
<svg> <g id="group"> <rect x="10" y="10" width="100" height="100" /> <circle cx="50" cy="50" r="40" /> </g> <animateTransform xlink:href="#group" attributeName="transform" type="rotate" from="0" to="360" dur="3s" repeatCount="indefinite" /> </svg>
In this code, the <g>
element is used to group the <rect>
and <circle>
elements together, and the <animateTransform>
element is used to animate the group by rotating it indefinitely. The xlink:href
attribute of the <animateTransform>
element specifies the target of the animation (in this case, the <g>
element with the id
of "group"
). The other animation attributes, such as attributeName
, type
, from
, to
, dur
, and repeatCount
, are used to control the animation in the same way as described in the previous answer.
rotate from the center of the svg
To rotate an SVG image from its center, you can use the transform-origin
attribute and specify the center point of the rotation. Here is an example that shows how to rotate an SVG image from its center:
<svg>
<image xlink:href="image.svg" transform="rotate(45)" transform-origin="50% 50%"
/>
</svg>
In this code, the transform-origin
attribute is set to "50% 50%"
, which specifies the center of the rotation (i.e. the center of the image). The transform
attribute is used to specify the rotation angle (in this case, 45 degrees).
Alternatively, you can use the <g>
element and specify the rotation transformation and the transform-origin
attribute on it, like this:
<svg>
<g transform="rotate(45)" transform-origin="50% 50%">
<image xlink:href="image.svg" />
</g>
</svg>
In this case, all the elements within the <g>
element will be rotated by 45 degrees around the center of the group.
The most basic carousel is probably just CSS overflow and scroll snapping. But the moment you want "drag" functionality, this approach doesn't work and we need to transform a div by "sliding" it along the x-axis. Once we do this we need to do the hard work of figuring out the left-x position of every slide div. These become our slide points, which can be used to write our own snapping functionality and pagination (a sketch below).
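A rough sketch of collecting those slide points (the .track / .slide selectors are made up), which pairs with the closest-number helper that follows:
const track = document.querySelector<HTMLElement>(".track");

if (track) {
  // the left-x offset of each slide relative to the track becomes a snap point
  const slidePoints = [...track.querySelectorAll<HTMLElement>(".slide")].map(
    (slide) => slide.offsetLeft
  );
  // on drag end, translate the track to the snap point closest to the current offset
  console.log(slidePoints);
}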
Needed a function to find the closest number given an array of numbers, and got:
/**
To find the closest number to a given number in an array of numbers, you can use a combination of the Math.abs() and Math.min() methods.
The Math.abs() method returns the absolute value of a number, which is the number without its sign (positive or negative). This is useful for comparing the distance between two numbers, because the distance between two numbers is always positive.
The Math.min() method returns the smallest number in a given array of numbers. This is useful for finding the smallest absolute difference between a given number and the numbers in an array.
Here is an example of a function that uses these methods to find the closest number to a given number in an array:
*/
function findClosestNumber(numbers, target) {
const distances = numbers.map(number => Math.abs(number - target));
const closestIndex = distances.indexOf(Math.min(...distances));
return numbers[closestIndex];
}
const numbers = [1, 5, 10, 15, 20];
const target = 12;
findClosestNumber(numbers, target)
// nearest will be 10
.layout {
display: grid;
grid-template-columns:
calc( 50vw - 500px )
repeat( 8, 1fr )
calc( 50vw - 500px);
}
This trick means we have an inner grid of 8 columns that is constrained to 1000px: each outer gutter is calc(50vw - 500px), so the inner columns take up 100vw - 2 * (50vw - 500px) = 1000px. The maths checks out!
Messing around with masking images with SVGs. Turns out that when you wrap an SVG in clipPath, the tricks to make SVGs responsive to their parents go out the window. In fact, I'm not sure the SVG even affects the layout; it kinda just blurs into the background and becomes an object that can be embedded into other objects.
There are new tricks to make the SVG clip scale to the image - https://cssfordesigners.com/articles/clip-path-scaling
Deriving state from the previous render, no useEffect required:
const [count, setCount] = React.useState(1);
const [tuple, setTuple] = React.useState([null, count]);
if (tuple[1] !== count) {
setTuple([tuple[1], count]);
}
set default nvm version:
nvm alias default 6.11.5
if you want it pegged to that specific version.
You can also do nvm alias default 16
or nvm alias default node
.
Either way, you’ll want to upgrade to the latest version of nvm
(v0.39.2 as of this writing)
# nvm set default node.js version 16.14.2
$ nvm alias default 16.14.2
$ nvm use
$ node -v
# v16.14.2
Add-to-calendar functionality is surprisingly nuanced. The best React lib for doing this I've found so far is probably react-add-to-calendar-hoc.
You could implement this yourself. The hardest part is creating an API that normalises the construction of the URL params across all calendar providers. Implementation details are on this codesandbox.
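A sketch of the Google Calendar flavour of that URL construction; the event shape and date formatting here are simplified assumptions:
type CalendarEvent = { title: string; start: Date; end: Date; details?: string };

function googleCalendarUrl({ title, start, end, details = "" }: CalendarEvent) {
  // Google expects compact UTC timestamps like 20240101T120000Z
  const fmt = (date: Date) => date.toISOString().replace(/[-:]|\.\d{3}/g, "");
  const params = new URLSearchParams({
    action: "TEMPLATE",
    text: title,
    dates: `${fmt(start)}/${fmt(end)}`,
    details,
  });
  return `https://calendar.google.com/calendar/render?${params}`;
}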
Sites that send X-Frame-Options: DENY can't be embedded in an iframe.
There are interesting hacks to bypass the X-Frame settings, but it's probs not a good idea. The better solution would be to pre-render the website server-side and proxy the content on your own domain. I assume this is how something like Pocket works.
localhost on a Docker container is scoped to its own internal network. So if you want localhost to refer to the "host", i.e. the machine running Docker, on Mac we can use host.docker.internal.
2022-11-05T18:35:07.117 app[c84f0e2a] lhr [info] [ioredis] Unhandled error event: Error: getaddrinfo ENOTFOUND fly-medusa-redis.upstash.io
Following chats indicate that it's because the node app service needs its Redis client configured to accept an IPv6 connection.
A basic docker-compose.yml file looks like this:
# version of docker compose
version: '3'
# We can setup multiple services in one go
services:
  # provide the name of the service, can be anything
  node-app:
    # everything below here are the same docker cli commands
    build: .
    ports:
      - "3000:3000"
    volumes:
      # docker compose lets you use relative paths for volumes
      - ./:/app
      # this is a trick so node_modules don't get overridden
      - /app/node_modules
    environment:
      # we can provide envs either explicitly or with an env file
      - PORT=3000
    # env_file:
    #   - fileName
Execute by running docker-compose up. To tear down, run docker-compose down; additionally provide the -v flag to remove related volumes.
docker-compose also builds the image and will cache this step if it can.
Note: if you update your Dockerfile, docker-compose is not smart enough to know to rebuild the image. Use the --build flag to force a rebuild of the image.
It's possible to have multiple docker-compose.yml files; perhaps you want a Docker setup for dev and another for production.
Set up docker-compose.dev.yml and docker-compose.yml - https://www.youtube.com/watch?v=9zUHg7xjIqQ&t=4896s
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up -d
This merges the docker-compose files together; the later file overrides the former.
You can have if/else statements in your Dockerfile.
ARGs are build-time environment variables.
Continuing with docker lesson at https://www.youtube.com/watch?v=9zUHg7xjIqQ&t=1596s
You don't even need to copy node_modules over to your Docker image. Remember to use a .dockerignore; a basic one looks like this:
node_modules
Dockerfile
.dockerignore
.git
.gitignore
docker run -v <full-path-from-host>:<docker-path> -p 1234:3000 -d --name node-app node-app-image
Append ro to a bind mount to make it read-only: -v <full-path-from-host>:<docker-path>:ro
This means Docker can't create new files on the host machine.
We can use the $ syntax in our Dockerfile, e.g. EXPOSE $PORT. This means we can pass the value in with a CLI flag when running the container: docker run --env PORT=<PORT-number>
A basic docker script for a node.js app:
# Mandatory, every dockerfile must be based on another image
FROM node:16
# This sets the current working dir, all further action will be made relative to this path
WORKDIR /app
# We copy the package.json and install deps first for performance reasons.
# This is the longest step, so dependencies are cached for subsequent builds
COPY package.json .
RUN npm install
# We can now copy the rest of the source code across; we don't need to re-run npm install as dependencies are installed already
# '. .' looks kinda confusing. This just means copy everything from dev working dir to the docker working dir
COPY . .
# This doesn't actually do anything and acts more as documentation that port 3000 is being exposed and can be port forwarded for external access
EXPOSE 3000
# run the bash commands that will start the app
CMD ["node", "index.js"]
Docker caches these steps so a second build will generally be faster.
To build an image run:
docker build . -t node-app-image
. is the path to where the Dockerfile is
-t is the name of the image
To run a built image:
docker run -p 3000:3000 -d --name node-app node-app-image
-p 3000:3000 is how we forward a port from the host machine to the exposed port on the Docker container.
Finally getting a chance to tinker with fly.io!
flyctl postgres create
If you want to connect to a fly.io Postgres with PgAdmin you need to proxy the internal URL to localhost on a specified port with the command:
fly proxy 6543:5432 -a app-name
Proxying local port 6543 to remote [app-name.internal]:5432
this exposes internal fly.io db to localhost:6543
success is not linear
Some real life stuff here.
— Marshall Long (@OGBTC) October 23, 2022
"Success isn't linear" by Yoann Bourgeois pic.twitter.com/GQJj4ztXWM
Classes in TS are still confusing to me, especially when dependencies are automagically injected into classes for you
Learnt an Interesting pattern of taking injected dependencies and binding them to your local class.
When creating methods in Medusa custom services, these are exposed to your API, loaders and subscriptions, which means your service is a giant class full of business logic. There's lots of magic around dependency injection and the new term "dependency container"; under the hood Medusa makes heavy use of Awilix.
Hit a pretty bad bug where I can't create products in the Medusa database!? - https://github.com/medusajs/medusa/issues/2034
Evaluated a few more OpenAPI type generators.
Tinkering around with how Medusa plugins work. They effectively let you isolate logic which has a specific purpose whilst still giving you the full hooks and features of the Medusa server. The core building blocks we can work with are:
api - custom URL endpoints which can run any arbitrary code when hit
service - the most important aspect, a class in which you can implement custom business logic in methods. Medusa exposes these methods to all aspects of your plugin, including the api and loaders
loader - from what I understand, functionality that you can run at a certain point in the lifecycle, such as the startup of the plugin. You can even time it to run via a cron job - https://docs.medusajs.com/advanced/backend/cron-jobs/create/#2-create-cron-job
Plugins are a faff to work with right now, so I'm working around it by developing against a Medusa server. The process of extracting this logic out of the Medusa server and into a plugin looks straightforward though, as extending a Medusa server and a plugin have the same directory structure. Keeping an eye on the following GitHub issues:
Resources for learning about the maths behind animations:
Creative Coding for complete Beginners - (feel free to skip a few of the first videos)
Hack Physics and JavaScript (part 2) :: solving triangles = profit by Rachel Smith
scss dithering effect codepen ala Arc Browser
https://www.noiseandgradient.com/?chaos=0.65&colors=%23ed625d-%2342b6c6-%23f79f88&grain=0.35
How I think this Printful - Medusa integration is going to work
Backend
Sync Printful store with Medusa
Continuously keeping Medusa in sync with Printful may require setting up webhooks on Printful for when items are deleted, updated etc
Storefront
Render all products - https://docs.medusajs.com/references/js-client/classes/ProductsResource
Use Medusa.js for carts/checkout - https://docs.medusajs.com/references/js-client/classes/CartsResource a. I would need to wire up the Printful tax calculator around here too - https://developers.printful.com/docs/#tag/Tax-Rate-API
Regarding payment and fulfilment with Printful, I'm unsure of the correct flow. In my head this is how I think it should work:
On the order.payment_captured event, I could fire off a Printful order via their API with details captured in Medusa - https://developers.printful.com/docs/#tag/Orders-API
Testing out Medusa for the shiba swag store.
There are 3 components to a Medusa store:
create-medusa-app is a handy tool to scaffold up all three components quickly.
Looks like there is a Printful integration already - https://github.com/olivermrbl/medusa-printful. EDIT, doesn’t work…
Looking into creating my own medusa plugin to integrate with the medusa platform, this could be a powerful workflow…
Conclusions after research today: Medusa provides utilities for aspects of an eCommerce store. The bits I'm most interested in are:
The main challenges come around syncing back and forth between Printful and Medusa.
sweet font pairings - https://www.fontshare.com/pairs
slick SVG animations - https://rive.app/
Image
Video
3D
Code
Stable diffusion experiments today!
first off playing around with Diffusion Bee - Stable Diffusion GUI App for M1 Mac . A one-click install solution for stable diffusion.
Hosted Stable Diffusion - https://beta.dreamstudio.ai/dream
Time to explore if my Steam Deck can run a local version of Stable Diffusion with faster speeds than my M1 MacBook…
The Changelog – Episode #506 # Stable Diffusion breaks the internet
Create a Next.js App in the Current Directory:
npx create-next-app .
Let's try to learn Docker again! https://www.youtube.com/watch?v=zJ6WbK9zFpI
containers vs VMs
image vs containers
Dockerfiles are used to create docker images
(stopped at https://www.youtube.com/watch?v=zJ6WbK9zFpI)
Where do I get the clientId and clientSecret? Is https://<SAML-JACKSON-URL>/api/oauth/saml the SAML ACS?
Learning about OAuth for work, watching this - https://www.youtube.com/watch?v=CPbvxxslDTU
open id connect
Full workflow will look like the following:
Auth is authentication (who are you?) and authorisation (can you?).
Auth always happens server side (on a per request basis). never client side.
Your client (e.g. a React app) can only understand "who" the user is via a server request, where the server returns a cookie/token back to the client.
Cookies are usually best as we can restrict JavaScript access to cookies.
All subsequent requests to the server will contain this cookie and the server can validate whether the client can or cannot access a resource.
Authorisation workflows usually need to be coupled with a DB. Your User schema in the database could have a Role field with either USER or ADMIN. Access to resources could subsequently be controlled simply by checking this value first (a rough sketch below).
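A minimal sketch of that check; the user shape and error handling here are assumptions:
type Role = "USER" | "ADMIN";
type User = { id: string; role: Role };

// runs on the server, after the user has been resolved from the session cookie
function requireRole(user: User | null, role: Role) {
  if (!user) throw new Error("401: not authenticated");
  if (user.role !== role) throw new Error("403: not authorised");
}

// e.g. requireRole(currentUser, "ADMIN") before returning an admin-only resource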
react-map-gl is the way to go when working with Mapbox in React. It handles the initialisation of Mapbox for you and lets you compose Mapbox elements like attribution and point layers in an idiomatic React way.
I can't get useMap to work correctly. I thought it was used to target a map instance, but it always returns null? Instead I've resorted to binding a ref to the Map component, which seems to work…
import React, { useEffect, useRef } from "react";
import Map, { Marker, useMap, type MapRef } from "react-map-gl"; // imports assumed for react-map-gl v7

// HomePageMapProps is defined elsewhere in the project
export const MapExample: React.FC<HomePageMapProps> = ({ clientLocation }) => {
const mapRef = useRef<MapRef>(null); // bound to <Map> below via the ref prop
const { current: currMap } = useMap(); // kept returning null for me
useEffect(() => {
if (mapRef.current && clientLocation) {
mapRef.current.flyTo({
center: [clientLocation.long, clientLocation.lat],
speed: 0.8,
zoom: 10,
});
}
}, [clientLocation]);
return (
<Map
ref={mapRef}
initialViewState={{
zoom: 1,
}}
mapStyle="mapbox://styles/mapbox/streets-v11"
mapboxAccessToken={MAPBOX_TOKEN}
attributionControl={true}
projection="globe"
>
{clientLocation && (
<Marker
longitude={clientLocation.long}
latitude={clientLocation.lat}
anchor="bottom"
>
</Marker>
)}
</Map>
);
};
Messing around with the geolocation and permissions web APIs for supa meet.
Both APIs can work hand in hand. You might want to check the state of the Permissions API for geolocation up front, so you have control over the UX of when the GPS browser prompt happens.
navigator.permissions.query({ name: 'geolocation' }).then((result) => {
  if (result.state === 'granted') {
    // permission already granted, safe to call the geolocation API straight away
    enableGeolocation();
  } else if (result.state === 'prompt') {
    // the user hasn't decided yet; show a button so the GPS prompt isn't a surprise
    showButtonToEnablesGeolocation();
  } else if (result.state === 'denied') {
    // fallback behaviour; don't do anything if the permission was denied
  }
});
https://developer.mozilla.org/en-US/docs/Web/API/Permissions/query
Once permissions are granted we can confidently use the geolocation API which may prompt the user with a browser notification without it being too intrusive.
Geolocation accepts three arguments: a success callback, an error callback and an options object.
getCurrentPosition(success, error, options)
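For example (the options shown are just one sensible combination):
navigator.geolocation.getCurrentPosition(
  (position) => {
    // success: we have coordinates
    console.log(position.coords.latitude, position.coords.longitude);
  },
  (error) => {
    // error: permission denied, timeout, etc.
    console.error(error.message);
  },
  { enableHighAccuracy: true, timeout: 10_000, maximumAge: 0 }
);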
Storing this with context and useState makes sharing this state across components really seamless.
supabase db remote changes takes forever!? Settling on this:
import "zx/globals";
require("dotenv").config({ path: ".env.local" });
const SB_PROJECT_URL = process.env.NEXT_PUBLIC_SUPABASE_URL;
const SB_ANON_KEY = process.env.NEXT_PUBLIC_SUPABASE_ANON_KEY;
await $`npx openapi-typescript ${SB_PROJECT_URL}/rest/v1/?apikey=${SB_ANON_KEY} --output types/supabase.ts`;
&> /dev/null silences both stdout and stderr - https://stackoverflow.com/a/18063720
When in tRPC land, handle errors with trpc.TRPCError and don't return an error object like in GQL.
Svelte: bind:this={domElement} and use:enhance. (Interestingly, others found it confusing too.)
https://usehooks-ts.com/ 2k ⭐️
+page.js is a new convention that indicates there is some additional server logic that needs to run before the page is rendered. Is this the equivalent of getServerSideProps?
+layout.js means each layout can also have its own server-side logic.
Mailgun automatically put me on the Foundation plan when my trial ran out. Fortunately, after a support ticket they refunded me and put me on the undocumented "Flex" plan which gives me 1,000 emails free a month. This is not as competitive as SendGrid, which allows 100 emails a day forever…
mailer lite might be another email service to check out.
Lots of Postgres things that I want to learn / play around with.
- supabase cli is looking nice! - support for migrations - db branches with syntax similar to git
supabase db branch new my_branch
supabase db branch switch my_branch
#app-idea obsidian as a admin blog editor for a static blog site
#blog-idea lessons for working at digital web agency
Managed to chip away at the shiba swag store a little bit last night. Back on the pomodoros for little bursts of productivity. Seems to work great for working on side projects after work when I'm mentally drained.
Thinking of creating a Chrome extension that puts you into a focus mode with a pomodoro timer, along with Lofi Girl running in the background. For the duration of the timer it blocks Twitter and other social media.
Reading through the Next.js layouts RFC pt 1. My current mental model:
- There will be a new /app directory which can be used to incrementally port over pages from /pages.
- At the root of app we can define a layout.tsx file which will wrap all components. This replaces _app.tsx and _document.tsx. getStaticProps and getServerSideProps work here, which opens up the ability to fetch global data across your website without needing to run the data request on every page.
- Folder-based routing still exists, but index.tsx will now need to be named page.tsx. This means we can colocate test files and other things without needing the page extensions API. You can also provide a layout.tsx in each nested route, which will be nested in its parent's layout.
- There appears to be no API to override the layout right now?
- All components will be React Server Components by default. This opens up the ability to run getServerSideProps and getStaticProps on components!? (need to verify this)
styled-components has an as prop that can override the rendered element, e.g.:
const StyledH1 = styled.h1``
() => <StyledH1 as={"h2"}> Turn me into a h2 </StyledH1>
Array(4).fill(['a','b']).flat()
// (8) ['a', 'b', 'a', 'b', 'a', 'b', 'a', 'b']
const items = Array(100)
.fill(<div>hello</div>)
I really want to see if Remix + fly.io can do everything I want with shiba swag store.
Re-reading this article about aspect-ratio. object-fit is probably what you want with content like blogs.
related: 2021-06-06 2022-01-23
esbuild-register is a utility that can transpile and run .ts files. Once installed it can be called like:
node --require esbuild-register prisma/seed.ts
Form feels like black magic but in action works really well. Having the data instantly available in another function, which is automagically the backend, streamlines things massively.
Revisiting CSS grid again:
justify-items - will justify an individual grid child to either start, end or center; by default it is stretch
align-items - can move a grid child on the y-axis with the same properties, but also supports center
Aligning the entire grid container:
justify-content - same properties, but also supports space-around, space-evenly and space-between
align-content - same properties
Child CSS grid element properties:
align-self
justify-self
place-self - combines both of the above; the 1st arg is vertical (a.k.a. align), the second arg is horizontal (a.k.a. justify)
Deno Deploy + Fresh can be deployed and distributed to world CDNs in less than 10 seconds. This is due to Fresh having zero build time and JIT compilation of the framework when requested at the edge.
We're moving to a post-Linux era for deployment. Deno Deploy is built on V8 isolates, which is also the reason why cold starts are so fast. More on V8 isolates here.
Fresh is an edge-native framework. This means it was built to work directly with Deno Deploy and V8 isolates.
REMEMBER 👏 TO 👏 TAKE 👏 BREAKS
Watching more Theo stuff on YouTube. His breakdown of the lifecycle of a Next.js render flow is super interesting:
any response
Committing to using Zustand for global state management for the shiba swag store. Decided it's probably an anti-pattern to share hooks around in global state, which is what I was doing with React context providers… When using Zustand, keeping things simple with basic primitives seems to be the way.
useViewportScroll
hook, and flipping it 90 degrees. mega hack!animate any svg for free https://svgartista.net
- Scroll warp effect with scrollex, framer motion and CSS clip-path
Use `.gitkeep` to persist an (otherwise empty) folder in source control.
Whenever you set boolean flags like:
🏳️ setIsLoading(true)
🏳️ setIsActive(true)
🏳️ setHasFailed(true)
You're not avoiding a state machine. You're creating an implicit one, where the transition function lives in your head, and is hopefully the same as your teammates' (unlikely).
— David K. 🎹 (@DavidKPiano) June 15, 2022
Two very lightweight React state management libs by pmndrs:
When to use which summary - https://github.com/pmndrs/jotai/issues/13
Going mad trying to assert non-nulls with TypeScript, as `Array.filter` alone is not good enough.
Reading about type predicates, but still struggled a lot.
function isNonNull<T>(val: T | null | undefined): val is T {
return val !== null && val !== undefined;
}
const cleanupNullsinArray = cleanupNulls.filter(isNonNull); // `cleanupNulls` is an array that may contain null/undefined entries
Exploring Commerce Layer for shiba swag store.
Overall I think Commerce Layer may not be required for shiba swag store, as using Printful with dropshipping handles most of the commerce requirements.
GQL codegen was a terrible idea due to the fact that the TS interface it generates makes everything optional, meaning defensive code for everything. Also the response we're getting back doesn't match the query given, so it's one big lie?!
If we're getting the full payload back regardless, GROQ codegen is superior to GQL codegen, as at least it honours required fields.
Settled with GQL codegen with Sanity. It means an extra step to deploy a GQL endpoint before running a codegen script.
The Sanity CLI handles everything on the GQL side with a simple command: `sanity graphql deploy`
`:target` selector experiment: when a URL fragment appears (set via an HTTP header from a Netlify function), the targeted element gets styled.

New project at work might give me an excuse to finally tinker around with Nx and Storybook.
First stop, Nx docs!
This was a nice intro overview to Nx
Need to check out this tutorial - https://nx.dev/core-tutorial/01-create-blog

tl;dr, here's the code for verifying a Mailgun webhook signature with an HMAC:
let crypto;
try {
crypto = require('node:crypto');
} catch (err) {
console.log('crypto support is disabled!');
}
const apikey = "this-would-be-a-hash-provided-by-mailgun"
// this would be available in the event.body of the webhook response
const signature = {
token: 'jkashdjkashjkdhaksjhdashk',
timestamp: '1653941793',
signature: 'hjdkashdjkashdaskh'
};
const key = signature.timestamp + signature.token
const hash = crypto.createHmac('sha256', apikey)
.update(key)
.digest('hex');
console.log(hash) // this should match signature.signature
You get SEO penalties for using Netlify preview branches, as we now have duplicated content on different URLs.
The solutions for this are:
- use `_redirects` to point all .netlify domains to your live URL
- add `rel="canonical"` on all pages
https://www.codesections.com/blog/netlify/

Another way might be to use snippet injection, which has access to build-time Netlify environment variables - https://docs.netlify.com/site-deploys/post-processing/snippet-injection/
Seems netlify do restrict indexing of some of your deploys - https://docs.netlify.com/site-deploys/overview/#search-engine-indexing
animated SVG strokes - https://svgartista.net/
Re-exploring JS-less modals using the `:target` selector; this codepen is still proving to be useful.

The `:target` CSS pseudo-class represents a unique element (the target element) with an `id` matching the URL's fragment.

Interestingly it will also work with query params too, e.g. example.com/#?test=test
Remix vs Next.js - https://www.brenelz.com/posts/next-vs-remix
https://css-tricks.com/linearly-scale-font-size-with-css-clamp-based-on-the-viewport/
New `useEffect` docs: https://beta-reactjs-org-git-effects-fbopensource.vercel.app/learn/synchronizing-with-effects
An empty dependency array (`[]`) corresponds to the component "mounting", i.e. being added to the screen.

Got Mailgun configured to send emails from my chiubaca.com domain.
Had to add `TXT`, `MX` and `CNAME` records to my domain DNS, which was not on Google Domains but on Netlify, as the nameservers had been forwarded to Netlify. #note-to-self

When sending emails via the EU region on Mailgun, make sure to include the `url` property when setting up the Mailgun client:
const API_KEY = "xxxxxxxxXxxxxxxxxxxxxxxxxxxx-xxxxxxx-xxxxxx";
const DOMAIN = "mydomain.com";
const formData = require('form-data');
const Mailgun = require('mailgun.js');
const mailgun = new Mailgun(formData);
const client = mailgun.client({username: 'api', key: API_KEY, url:"https://api.eu.mailgun.net"});
// console.log(client)
const messageData = {
from: 'Yoopster <[email protected]>',
to: '[email protected]',
subject: 'Hello',
text: 'Testing some Mailgun awesomeness!'
};
client.messages.create(DOMAIN, messageData)
.then((res) => {
console.log(res);
})
.catch((err) => {
console.error(err);
});
- https://stackoverflow.com/a/71574225/7207193
Working on my design skills and playing around with Figma some more. Design feels so much harder than coding IMO. Leaning heavily on Dribbble and other people's font pairings.
https://www.youtube.com/watch?v=m8R9vvsbU4I
`<dialog>` is the new HTML element for building modals and dialogs more easily without falling into a11y pitfalls.
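A minimal sketch of the JS side, assuming a `<dialog id="my-dialog">` and two hypothetical buttons exist in the markup:

```js
const dialog = document.querySelector("#my-dialog");

// showModal() opens it as a true modal: focus is trapped, Esc closes it,
// and a ::backdrop pseudo-element is rendered behind it for free
document.querySelector("#open-btn").addEventListener("click", () => dialog.showModal());

// close() dismisses it programmatically
document.querySelector("#close-btn").addEventListener("click", () => dialog.close());
```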
Native support for TS in netlify functions landed last year and I missed it ! https://www.netlify.com/blog/2021/04/19/announcing-native-typescript-support-for-netlify-functions/
`Location` header, e.g. `Location: "/success"`
If you want to hand-roll your own Google Places autocomplete form, use `AutocompleteService()`. Otherwise, if you're happy with the prebuilt Google widget, `Autocomplete` is probably good enough.
`address_components` https://developers.google.com/maps/documentation/javascript/examples/places-autocomplete-addressform

Fantastic video for some advanced TS tips and tricks https://youtu.be/hBk4nV7q6-w
Trying to integrate the Google Places API into shiba swag store. Interesting to see there is no isomorphic npm package for the Google Maps SDK. For the browser you have to use a script tag, similar to PayPal and Stripe.
Google Cloud Platform has a horrible onboarding, as they make enabling billing mandatory.
`PayPalButtons` components have a 1-to-1 mapping to the paypal-js package so far, except I've had lots of problems trying to get `onShippingChange` to patch the PayPal order in order to cross-check Printful for shipping estimations.
`onShippingChange` callback.

`process.env` in Next.js apps:
Note: In order to keep server-only secrets safe, Next.js replaces `process.env.*` with the correct values at build time. This means that `process.env` is not a standard JavaScript object, so you're not able to use object destructuring. Environment variables must be referenced as e.g. `process.env.PUBLISHABLE_KEY`, not `const { PUBLISHABLE_KEY } = process.env`.
Lazy day in the resort, working a little bit on shiba swag store. Doing important stuff like setting up ESLint for import ordering (sarcasm). But I finally have some downtime to revisit the ESLint docs for `import/order`.
I want to have react and next libs grouped at the top, node modules , internal modules, then relative files in spaced groups. I’ve kinda got it working with the following eslint config:
"rules": {
"@typescript-eslint/no-unused-vars": "error",
"@typescript-eslint/no-explicit-any": "warn",
"@typescript-eslint/ban-ts-comment": "warn",
"import/no-cycle": [1, { "maxDepth": 1 }],
"import/order":[
"error",
{
"groups": [ "external", "builtin" , "internal", "sibling","index" , "type" ],
"pathGroups":[{
"pattern": "{react,next,next/**}",
"position": "before",
"group": "external"
}
],
"pathGroupsExcludedImportTypes": [],
"newlines-between": "always"
}
]
}
minimatch expression - not something I've heard of before, but this minimatch cheatsheet was useful.

Really enjoying Chakra UI so far; it provides low-level utility props similar to utility classes in Tailwind. It makes a lot of sense and is very intuitive.
shiba swag store progress: slowly building up the shopping cart UI, it's not been too bad so far.
Reading a little on Stripe vs PayPal
Settling with PayPal for now. Had a bit of success setting it up in next.js.
simply include the js lib via a next.js Script
component.
docs are pretty good https://developer.paypal.com/sdk/js/reference/
There’s a react wrapper - https://github.com/paypal/react-paypal-js
storybook too - https://paypal.github.io/react-paypal-js/?path=/story/example-paypalbuttons—default
Shared some advice with a developer on a bootcamp on staying motivated:

CSS `max()` can set a minimum size for a font - https://developer.mozilla.org/en-US/docs/Web/CSS/max#setting_a_minimum_size_for_a_font

Thinking a bit about the stack architecture for building-a-headless-dropshipping-site …
`pending`, `failed`, `paid`… starting to see the value of something like Snipcart: not only does it handle the frontend cart UI, but it also acts as a store transactions backend…
Wow, Supabase just launched some amazing new features!
I'm probably most excited about GraphQL. Even though the supabase.js client is really awesome, it still does not have the full type safety which you can get from combining it with codegen. Will be drilling into this example to explore this more - https://github.com/supabase-community/supabase-graphql-example .
Snipcart charges 2% of every transaction but:
*For clients with monthly sales under $629 CAD, the 2% will be replaced by a $13 CAD monthly fee. Note that you'll be charged in USD.
Shopify Lite seems to be the only other interesting competitor, but it's still not doing exactly what Snipcart does with regards to a "headless" shopping cart which also integrates with other services.
Struggling to really understand why we have these middle-men, aside from the fact they handle tricky parts like VAT and linking up payment gateways. Snipcart also has some other added benefits like analytics and empty-shopping-cart strategies…
This is an awesome repo demoing how to integrate Printful ,snipcart and a react frontend. Going to dissect this apart and try to build my own version.
down the rabbit hole I go… VAT and Printful😪
Wrap components in `<AnimatePresence/>` and it will handle transitioning their unmounting state. So just think about a component's initial state, its animating state and its exit state.

Senior developers are just regular developers that have "seen your" code a hundred times before.
`IMPORTXML` is one of the best data crawling hacks I've seen. https://sheetsformarketers.com/google-sheets-formulas/importxml/

`null` or `undefined`, but you know with confidence that the data should always be ok. If it's not, you accept your programme will blow up. This can help save many lines of type-guarding logic.

`mix-blend-mode: exclusion` can help you blend an image with a background image in interesting ways.

Pass `className` to a styled component so you can extend an existing styled component. Object-oriented CSS baby!

Use `lodash/uniqWith` and `lodash.isequal` to dedupe an array of objects:

import uniqWith from 'lodash/uniqWith';
import isEqual from 'lodash.isequal';

const arrayOfObjects = [
  {
    id: 1,
    content: 'a'
  },
  {
    id: 1,
    content: 'a'
  },
  {
    id: 2,
    content: 'b'
  }
]

return uniqWith(arrayOfObjects, isEqual)
https://lodash.com/docs#uniqWith
#typescript fun
export type AsyncFunctionType = Awaited<ReturnType<typeof yourAsyncFunction>>
If it returns an array, simply get the first item from it:
export type AsyncFunctionType = Awaited<ReturnType<typeof yourAsyncFunction>>[0];
Configuring `jsx-a11y/anchor-is-valid` in Next.js projects. Add this to your `.eslintrc`:
"jsx-a11y/anchor-is-valid": [ "error", {
"components": [ "Link" ],
"specialLink": [ "hrefLeft", "hrefRight" ],
"aspects": [ "invalidHref", "preferButton" ]
}]
source : https://github.com/jsx-eslint/eslint-plugin-jsx-a11y/issues/402
Got a basic Shopify store set up pretty quickly. Realised it's $24 a month to run the store. I can understand why they charge this, as it was literally so easy to set up a t-shirt store with an integration to Printful. It was all set up in under 30 mins.
Going to wire up the integration with Printful and code the UI from scratch to save a lot of money!!
Making a start on finally understanding the Shopify platform…
High-level overview for technical folks: https://www.youtube.com/watch?v=gAkWsGlofyU&t . There is a WYSIWYG website builder, Shopify Payments, blogging systems. It makes me think of WordPress for ecommerce.
questions:
let pagesToSkip = CURRENT_PAGE_NUMBER === 1
  ? 0
  : CURRENT_PAGE_NUMBER * ARTICLES_PER_PAGE - ARTICLES_PER_PAGE;
`next/router`. There are listeners to track both of these events.

Tinkering around with framer motion some more to try and achieve a page transition effect - https://stackblitz.com/edit/nextjs-tfmzae?file=src%2Fcomponents%2FNav.tsx
Been tinkering around with the basics of framer-motion.
Once we import `motion` from framer like so: `import { motion } from "framer-motion"`, we can simply use `motion` components like so:
<motion.div
animate={{
x: 0,
y: 0,
scale: 1,
rotate: 0,
}}
/>
The `animate` prop will animate the CSS properties of that div using the animate object.
There are also a bunch of animation helpers, or what framer calls "multiple gesture animation props", e.g. `whileHover`, `whileTap`, `whileFocus`, `whileDrag` and `whileInView`. These let us run animations while these gestures are active.
Variants let us reuse animations:
const variants = {
visible: { opacity: 1 },
hidden: { opacity: 0 },
}
<motion.div variants={variants} />
<motion.div
initial="hidden"
animate="visible"
variants={variants}
/>
npm scripts:
Use `&&` (double ampersand) for sequential execution.
Use `&` (single ampersand) for parallel execution.
https://stackoverflow.com/a/39172660
css gradient borders with gradient text is surprisingly hard!
Leveraging react context for global state to store cms data like strings and urls. This way I can inject these strings directly into components without passing props all over the place.
the cms context :
interface CmsData {
homePageContent?: HomePageContent;
aboutPageContent?: AboutPageContent
}
type ContextInterface = [
cmsData: CmsData | undefined,
setMessages: React.Dispatch<React.SetStateAction<CmsData | undefined>>
];
const CmsContext = React.createContext({} as ContextInterface);
export const CmsProvider: React.FC = ({ children }) => {
const [cmsContent, setCmsContent] = useState<CmsData | undefined>(undefined);
return (
<CmsContext.Provider value={[cmsContent, setCmsContent]}>
{children}
</CmsContext.Provider>
);
};
using useContext
we can make hook to set and get the cms data:
export function useCmsContent(content?: CmsData) {
const [cmsContent, setMessagesState] = useContext(CmsContext);
useEffect(() => {
if (!content) {
return;
}
setMessagesState(content);
});
return cmsContent;
}
with this in place, can set data on first page load…
const Home: React.FC<HomeProps> = ({ homePageContent }) => {
// homePageContent is data from next getStaticProps
useCmsContent({ homePageContent });
return (
<>
{/*...truncated for brevity... */}
</>
)
}
then in any children components we can use the data easily without having messy props all over the place:
const DeeplyNestedComponent: React.FC = () => {
const cmsData = useCmsContent();
return (
<>
{cmsData?.homePageContent?.someText}
</>
)
}
Using `.env.local` with GQL Codegen:
`"codegen": "DOTENV_CONFIG_PATH=./.env.local graphql-codegen -r dotenv/config --config codegen.yml",`
alt=''
is fine
The difference between flex-end
and end
- https://stackoverflow.com/questions/54654050/difference-between-flex-end-and-end
When using `display: flex` with `justify-content`, always use `justify-content: flex-end`. Safari can't infer that the container is a flex container when using `justify-content: end`.
Starting to implement i18n for a work project and referring back to https://graphcms.com/blog/internatonalization-with-nextjs-and-graphcms
Leaning a lot on luxon for date and time stuff recently
`pointer-events: none;` - useful when there is an overlay positioned absolutely over your content, but you still want the things underneath to be clickable.

Not strictly zero days the last couple of days. Been working like crazy for a tight work deadline! I've learnt so many things, mainly niche Safari things:
- `gap`
- `webkit-box` is not standard but fixes things?!

Also just accelerated my mentoring, TS and React skills recently.

`-webkit-box-align: start;` - does our styled-components package need to be updated?!

Add an ellipsis to truncate long text after x amount of lines:
overflow: hidden;
text-overflow: ellipsis;
display: -webkit-box;
-webkit-line-clamp: 2;
-webkit-box-orient: vertical;
Something I’m still waiting for is a web gis app that is as slick to use as Figma. I wonder if this could be a possibility with a combination of WASM + GeoRust + JSTS + Turf.js
You don't need to use the browser's geolocation API to get your user's rough location. Just do a client-side query with one of these free services:
Brave might block this request though..
/* select all children except first child */
.select-children:not(:first-child) {
}
`FormData`. It converts form data into a JS object.

const handleSubmit = async (event: any) => {
event.preventDefault();
const form = new FormData(event.target);
const formData = Object.fromEntries(form.entries());
await fetch('/api/contact', {
body: JSON.stringify(formData),
headers: {
'Content-Type': 'application/json',
},
method: 'POST',
});
// const result = await res.json();
};
`querySelector`, but GSAP works perfectly with React refs.

Import `gsap` and `ScrollTrigger`, then finally register the plugin. I think registration only needs to happen once, but it's ok to register multiple times and it will be treated as a singleton.

import gsap from 'gsap';
import { ScrollTrigger } from 'gsap/dist/ScrollTrigger';
gsap.registerPlugin(ScrollTrigger);
import {} from 'styled-components';
import theme from '../theme';
declare module 'styled-components' {
type Theme = typeof theme
export interface DefaultTheme extends Theme {}
}
Had a quick look at GraphCMS and it’s shockingly similar to PrismicCMS. It’s much more GQL orientated which is fine as I would use Prismic with GQL too.
Like Prismic, they also provide a rich text rendered component with pretty much the exact same API - https://graphcms.com/blog/graphcms-react-rich-text-renderer
Managed to successfully wire up PrismicCMS with Next.js as a static blog site. Love how Astro.js copied how `staticProps` and `staticPaths` work; this made it so much easier to figure out in Next.js-land.
PrismicCMS returns rich text in a really verbose JSON format which needs to be serialized into HTML. It could be done by hand, but luckily there is a handy React component that handles it all for you: https://prismic.io/docs/technologies/rendering-the-rich-text-and-title-field-reactjs#output-as-a-component
Tinkering with PrismicCMS today. The Prismic builder seems interesting so far. From what I understand, you can create reusable UI 'slices' which content authors in Prismic can choose to add in themselves. These are not the reusable components which we use in development.
PrismicCMS provide so many API clients…
Reading up on Next.js getStaticProps
and getStaticPaths
. Nice tutorial https://spacejelly.dev/posts/how-to-create-pages-in-next-js-with-static-dynamic-data/#step-2-using-getstaticprops-to-statically-generate-a-page-with-dynamic-data
Not quite figured how to infer the types when consuming these static props in the component… Interesting suggestions here but nothing I like, https://stackoverflow.com/questions/65078245/how-to-make-next-js-getstaticprops-work-with-typescript
A `pre-commit` hook can run a lint on files before committing to a repo. But this will lint all the files in your project, even the ones that you are not committing, potentially blocking your work, which can be pretty annoying. Use `lint-staged` instead.

When using an `onClick` handler to append a class to an element, the following happens:

So how can we solve this?
Can we run the animation, i.e. add the class, after everything has finished re-rendering?
I don't think so…
Or can we ensure the animation has time to run before the whole page re-renders?
next-page-transitions, solves this.
it solves the problem of making sure only one page component is mounted at a time and that the next page isn’t mounted until the previous one has completed its exit animation.
The timeout
prop is particularly useful. It lets you delay how long it takes for the next page to render.
`import/order` is an ESLint rule (from eslint-plugin-import) that organises your imports.
groups array
`::before` pseudo element - this gives you a free `div`-like element to play with, to create a shutter effect by transitioning its `transform: translateY()` value.
Apply `transition: all 1s` to both the hover state and the regular state so the animation transitions both on hover and on hover-off. Thanks CSS-Tricks! - https://css-tricks.com/different-transitions-for-hover-on-hover-off/

`audio` is amazing and has made syncing up state trivial. Stores are just updateable objects, no need to think about the flux pattern, which I'm not sure is a good or bad thing yet.

Starting to get to grips with the ergonomics of svelte. Using custom template syntax feels odd after using JSX so much, but the syntax is intuitive enough.
custom use
action directives are powerful - https://svelte.dev/docs#template-syntax-element-directives-use-action
Should any elements that are fed into a third-party lib use `use` directives?
found a nice svg plugin for svelte - https://github.com/poppa/sveltekit-svg
Good tip on how to add additional custom aliases in svelte-kit by default only $lib/
works out the box - https://dev.to/danawoodman/how-to-add-module-import-aliases-in-sveltekit-2ck
I think this is how we add multiple tracks to an audio
instance - https://developer.mozilla.org/en-US/docs/Web/API/AudioTrackList
SVG `fe...` filters are powerful!! https://developer.mozilla.org/en-US/docs/Web/SVG/Element/feComposite

Thinking a little bit about my work presentation for the new year [[permanent-notes/making-eco-friendly-websites]].
The Preact compat layer lets you swap out React for Preact in existing React codebases. My understanding so far is that it provides the exact same API without any compromises. We did this for Penso and there were no issues; we're using the usual React features such as context, various hooks like `useState` and `useEffect`, and third-party libs like styled-components.
Worth noting that the `preact-compat` layer is an additional 2kb of overhead. But Preact is so small already at 3kb that 5kb in total is not so bad compared to React's ~120kb.
I like this summary why less code is better from https://robertknight.me.uk/posts/preact-react-tradeoffs/
Having less code makes everything which processes that code
run or start faster. For developers this includes build times, test
execution times, continuous integration cycle times and how long
it takes the application to load in development. Fast cycle times
make for happier and more productive developers. For end users,
it affects how long it takes the application to load in their browser,
particularly on lower-end devices and slower networks.
I can't find any trade-offs with using Preact over React, which does make me question why we don't just use Preact for everything?!
The `preact-compat` layer provides 100% API compatibility with React.
--background-gradient: linear-gradient(to right, #24243e, #302b63, #0f0c29);
div{
background : var( --background-gradient )
}
/*
* _colours.scss
*/
// These are reusable variables only to be consumed in this file, because there is repetition between the media queries and the data-theme overrides below
:root{
--dark-bg: #161616;
--dark-text: #dbdbdb;
--dark-code: #f5f5f5;
--dark-code-block: #6e6c6c;
--light-bg: white;
--light-text: #093050;
--light-code: #740101;
--light-code-block: #d1d1d1;
}
@media (prefers-color-scheme: dark) {
:root {
--background: var(--dark-bg);
--primary-text: var(--dark-text);
--secondary-text: var(--dark-text);
--links: var(--dark-text);
--code: var(--dark-code);
--code-block: var(--dark-code-block);
}
}
@media (prefers-color-scheme: light) {
:root {
--background: var(--light-bg);
--primary-text: var(--light-text);
--secondary-text: var(--light-text);
--links: var(--light-text);
--code: var(--light-code);
--code-block: var(--light-code-block);
}
}
// Override system preferences
html[data-theme="light"] {
--background: var(--light-bg);
--primary-text: var(--light-text);
--secondary-text: var(--light-text);
--links: var(--light-text);
--code: var(--light-code);
--code-block: var(--light-code-block);
}
html[data-theme="dark"] {
--background: var(--dark-bg);
--primary-text: var(--dark-text);
--secondary-text: var(--dark-text);
--links: var(--dark-text);
--code: var(--dark-code);
--code-block: var(--dark-code-block);
}
Digest more, consume less
THREE.MeshPhongMaterial
`animation-fill-mode` is a CSS property that can control when CSS properties are applied during a CSS animation:
- `backwards` applies the CSS property on the first animation keyframe
- `forwards` applies the CSS property on the last animation keyframe
- `both` - the animation will follow the rules for both forwards and backwards, thus extending the animation properties in both directions.
https://developer.mozilla.org/en-US/docs/Web/CSS/animation-fill-mode
The `viewBox` attribute must be included.

Svelte's two-way binding (a bit like `v-bind` in Vue):

<script>
let name = 'world';
</script>
<input bind:value={name}>
<h1>Hello {name}!</h1>
In the DOM, everything is a string. That’s unhelpful when you’re dealing with numeric inputs — type="number"
and type="range"
— as it means you have to remember to coerce input.value
before using it.
With bind:value
, Svelte takes care of it for you:
<input type=number bind:value={a} min=0 max=10>
<input type=range bind:value={a} min=0 max=10>
`bind` works across pretty much everything you expect it to, including `textarea`, `input`, `select`, and even on elements that support `textContent` with `contenteditable`. Useful for when binding `innerHTML`:
<div
contenteditable="true"
bind:innerHTML={html}
></div>
Svelte provides reactive bindings to audio
and video
elements
The complete set of bindings for <audio>
and <video>
is as follows — six readonly bindings…
- `duration` (readonly) — the total duration of the video, in seconds
- `buffered` (readonly) — an array of `{start, end}` objects
- `seekable` (readonly) — ditto
- `played` (readonly) — ditto
- `seeking` (readonly) — boolean
- `ended` (readonly) — boolean
…and five two-way bindings:
- `currentTime` — the current point in the video, in seconds
- `playbackRate` — how fast to play the video, where `1` is 'normal'
- `paused` — this one should be self-explanatory
- `volume` — a value between 0 and 1
- `muted` — a boolean value where true is muted

Videos additionally have readonly `videoWidth` and `videoHeight` bindings.
Get a reference to a DOM element using `bind:this={canvas}`. This is similar to `refs` in React.
Note interacting with reference needs to be done after the component is mounted. Therefore, the logic all needs to happen within the onMount
lifecycle hook.
Component bindings are kinda mind blowing. They provide an API to get access to a child's props.
I’m still trying to get my head around binding to component instances - https://svelte.dev/tutorial/component-this …
Lifecycle events are almost the same as vue.js
onMount
& onDestroy
- do something after component is mounted and when it is unmounted/destroyed
beforeUpdate
& afterUpdate
- Do something before or after the component is updated great example use case here https://svelte.dev/tutorial/update.
tick
- is a promise that can be called anytime to ensure that any pending updates have been completed before running something next. - https://svelte.dev/tutorial/tick
svelte stores is sveltes built-in global state management solution. Sort of like react context or Vuex, but it’s much easier to reason with!
writables
- is a mutable reactive object which has a set
, and update
methods
`readables` - a read-only store. It is instantiated as a function; the first arg is the initial value, the second is a callback which provides `set` and `stop` functions. `set` is called on the first subscription, `stop` is called on the last unsubscribe.
derived
- is a way to leverage the value from a different store - https://svelte.dev/tutorial/derived-stores
Nice way to keep store logic “clean” - https://svelte.dev/tutorial/custom-stores . A “custom” svelte store only needs to be an object that exposes a subscribe
method and it is automatically a svelte store.
bind
works on writable svelte stores - https://svelte.dev/tutorial/store-bindings
- `ctrl-a` - move the cursor to the beginning of the current line
- `ctrl-e` - move the cursor to the end of the current line
- `alt-b` - move the cursor backwards one word
- `alt-f` - move the cursor forward one word
- `ctrl-k` - delete from cursor to the end of the line
- `ctrl-u` - delete from cursor to the beginning of the line
- `alt-d` - delete the word in front of the cursor
- `ctrl-w` - delete the word behind the cursor

Wrote up swapping-react-for-preact-in-nextjs for work.
Nice css tips and tricks here https://www.livescience.com/little-known-sleep-stage-may-be-creative-sweet-spot
`patch-package` is a way to patch npm packages on the fly without having to wait for the maintainer.

.terminal-text {
  font-size: clamp(.4rem, 1vw, .8rem);
}

`ctrl` + `shift` + `s` will pull and push changes to GitHub, all within Obsidian. An `on: push` GitHub Action is configured to hit a Netlify redeploy webhook which will redeploy notes.chiubaca.com with the latest notes.

Supabase open sourced pg_graphql. It's a PostgreSQL extension that lets you query your PostgreSQL db with GQL! 🤯
Massive anxiety attack in the middle of the night. Read a couple of articles.
The difference between good stress and bad stress is the chance to return to equilibrium (homeostasis), which has beneficial effects in itself. This is the kind of anxiety i get in the middle of the night and the only way to recover is to get out of bed to chill out.
We live in our heads a lot of the time, and are happy that way…this means that we can become a bit disconnected from our bodies. 100% me in the last 4 years…
`git clone` the notes and move the articles into the right Astro directory; works surprisingly well!

GraphQL aliasing is a potential way to get around rate-limiting on GitHub's GQL API - https://gist.github.com/MichaelCurrin/6777b91e6374cdb5662b64b8249070ea#gistcomment-3911659

2 - install the server client
a - connect it to the dashboard with like one or two commands
That's it! Once it's configured you can apply model validation rules via the server for server-side validation. What's neat is you have client-side validation which is all configurable via the dashboard.
Get file from a github repo via the GQL api - https://gist.github.com/MichaelCurrin/6777b91e6374cdb5662b64b8249070ea
Create pages dynamically with Astro.js with `getStaticPaths()`. Note that if the params object includes any more data than necessary, the build fails!
Dynamic pages in Astro.js work exactly the same as in Next.js, whereby dynamic route-based pages are indicated with bracket notation e.g. `/posts/[page].astro`.
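A minimal sketch of a hypothetical `/posts/[page].astro` frontmatter (the page values are made up):

```js
export async function getStaticPaths() {
  // each entry may only contain `params` (and optionally `props`);
  // sneak anything else in and the build fails
  return [
    { params: { page: "1" } },
    { params: { page: "2" } },
  ];
}

const { page } = Astro.params;
```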
set default node version in nvm - https://reactgo.com/nvm-set-default-node-version/
yes! astro is working on prettier plugin - https://github.com/withastro/prettier-plugin-astro/tree/main/src
const fs = require('fs');
const DATA = require('../stats.json');
function getFileSize(file) {
const { size } = fs.statSync(`.next/${file}`);
console.log(`${file} is ${size} bytes`);
return size;
}
const fileSizes = DATA.chunks.map(element => getFileSize(element.files[0]));
const res = fileSizes.reduce((total, num) => {
return Number(total) + Number(num);
}, 0);
fs.writeFile(
'./public/size.json',
JSON.stringify({
size_in_bytes: res.toFixed(2),
size_in_kilobytes: (res * 0.001).toFixed(2),
}),
'utf8'
);
`Pull` - the reactivity happens via re-runs of functions, so your mental model needs to be 'how do we react once this function re-runs'. Whereas Vue is a `Push` system: we mutate variables and things automatically update; we're not reacting to a change.

What to do when `package-lock.json` has a merge conflict - https://blog.adriaan.io/merge-conflict-in-package-lock-json.html

`IntersectionObserver` detects when a specific element is intersecting the viewport. It does not let you detect when one element is intersecting another element.

const sectionOne = document.querySelector(".section1")
const options = {
root: null, // the view port
threshold: 0.5, // default 0 - a percent e.g 0.25 = 25% . how much of the element needs to be in view before the IntersectionObserver is triggered.
rootMargin: '0px' // The margin of the viewport. increasing this make the viewport more narrow. works like css eg. '10px 10px' .
};
const observer = new IntersectionObserver((entries, observer)=>{
entries.forEach((entry)=>{
console.log('entry: ', entry.target)
entry.target.classList.toggle('outline')
})
}, options)
observer.observe(sectionOne)
The `next/image` component is pretty awesome. I always knew it was clever, but the more I drill into it, the better it gets:
- Automatically serves images in modern formats like WebP, which is about 30% smaller than JPEG, if the browser supports it.
- Optimizes images on demand; build time won't change.
- Lazy-loads the image only when a certain threshold is reached while scrolling the viewport.
- You can decide to use it with Vercel's internal CDN or other providers like Cloudinary or Akamai.
- You can specify different image sizes for different and custom resolutions to be used dynamically.
- Automatically lowers the quality of the photo to a default threshold of 75%. This can be changed for each invocation.
https://betterprogramming.pub/how-to-use-next-js-10s-new-image-component-3b39dc4efe6f
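A minimal usage sketch (the image path and dimensions are made up):

```jsx
import Image from "next/image";

export default function Hero() {
  return (
    <Image
      src="/shiba.jpg" // served from /public
      alt="A very good shiba"
      width={640}
      height={480}
      quality={75} // 75 is the default; can be overridden per invocation
    />
  );
}
```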
Lighting is key. Environment mapping on the background, but also applying the environment map back onto the 3D object mesh, can give good results. Very easily done with `scene.environment = environmentMap;`
There are a lot of renderer settings! It's a good idea to enable them all and run them through dat.GUI.
`gltf`, which can either be a readable JSON format and/or a binary format.
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
//...
const gltfLoader = new GLTFLoader();
gltfLoader.load(
"/models/Duck/glTF/Duck.gltf",
(gltf) => {
console.log("success");
console.log(gltf);
scene.add(gltf.scene.children[0]);
},
(progress) => {
console.log("progress");
console.log(progress);
},
(error) => {
console.log("error");
console.log(error);
}
);
// ....
Note: to import Draco-compressed glTF you need to use the DRACOLoader.
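A rough sketch of the Draco setup (the decoder path and model path are assumptions; they depend on where you copy the decoder files and where your assets live):

```js
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/examples/jsm/loaders/DRACOLoader.js";

// point the loader at the folder containing the Draco decoder files
const dracoLoader = new DRACOLoader();
dracoLoader.setDecoderPath("/draco/");

const gltfLoader = new GLTFLoader();
gltfLoader.setDRACOLoader(dracoLoader);

gltfLoader.load("/models/Duck/glTF-Draco/Duck.gltf", (gltf) => {
  scene.add(gltf.scene);
});
```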
loads of free gltf to use here - https://github.com/KhronosGroup/glTF-Sample-Models
glTFs can be created with animations built in, which can be activated using the `THREE.AnimationMixer`.
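A minimal sketch of playing a baked-in clip (the model path is an assumption; `deltaTime` comes from the usual tick function):

```js
let mixer = null;

gltfLoader.load("/models/Fox/glTF/Fox.gltf", (gltf) => {
  // the mixer drives all animation clips attached to this scene
  mixer = new THREE.AnimationMixer(gltf.scene);
  const action = mixer.clipAction(gltf.animations[0]);
  action.play();
  scene.add(gltf.scene);
});

// inside the tick function, advance the animation by the frame delta:
// if (mixer) mixer.update(deltaTime);
```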
three.js 3d model tool - https://threejs.org/editor/
`background: space no-repeat` will space-repeat an image, with spacing, to perfectly fill up a given space so the image never gets cropped.
https://www.youtube.com/watch?v=IkVDgvnjCHo
https://twitter.com/slightlylate/status/1460310824715620355?t=esFsEtIVE5XVWdpsxYn-dA&s=19
By default in Three.js, the camera perspective is "looking" directly down the z-axis.
- "up" and "down" is the `y` axis
- `x` axis
- `z` axis

It's usually a good idea to try and normalise event positions, going from 0 to 1 down to -0.5 to 0.5.
A mouse parallax effect can created easily by animating the position of a 3D object based on the position of the mouse. When combined with clamping of values above we can constrain the movement.
Often a parallax technique can feel "mechanical". "Lerping" is an effect that makes the movement lag slightly to make it feel smoother.
The idea behind the formula is that, on each frame, instead of moving the camera straight to the target, we are going to move it (let’s say) a 10th closer to the destination. Then, on the next frame, another 10th closer. Then, on the next frame, another 10th closer.
cameraGroup.position.x += (parallaxX - cameraGroup.position.x) * 0.1
cameraGroup.position.y += (parallaxY - cameraGroup.position.y) * 0.1
e.g
/**
* Animate
*/
const clock = new THREE.Clock();
let previousTime = 0;
const tick = () => {
const elapsedTime = clock.getElapsedTime();
const deltaTime = elapsedTime - previousTime;
previousTime = elapsedTime;
for (const mesh of sectionMeshes) {
mesh.rotation.x += deltaTime * 0.1;
mesh.rotation.y += deltaTime * 0.12;
}
// Animate camera
camera.position.y = (-scrollY / sizes.height) * objectsDistance;
const parallaxX = cursor.x;
const parallaxY = cursor.y;
cameraGroup.position.x +=
(parallaxX - cameraGroup.position.x) * 5 * deltaTime;
cameraGroup.position.y +=
(parallaxY - cameraGroup.position.y) * 5 * deltaTime;
// Render
renderer.render(scene, camera);
// Call tick again on the next frame
window.requestAnimationFrame(tick);
};
tick();
`vmin` - https://css-tricks.com/simple-little-use-case-vmin/

`THREE.MeshToonMaterial` is quite fun. It creates a cel-shaded cartoon effect. It's important that it's used along with a texture file.
`magFilter` set to `THREE.NearestFilter` can be used for the `gradientMap` of `MeshToonMaterial`:

// Texture
const textureLoader = new THREE.TextureLoader();
const gradientTexture = textureLoader.load("textures/gradients/5.jpg");
gradientTexture.magFilter = THREE.NearestFilter;
// Material
const material = new THREE.MeshToonMaterial({
color: parameters.materialColor,
gradientMap: gradientTexture,
});
Was reading "NFT misconception: JPEGs aren't on the Blockchain".
{
"title": "Asset Metadata",
"properties": {
"name": {
"description": "Identifies the asset to which this NFT represents"
},
"description": {
"description": "Describes the asset to which this NFT represents"
},
"image": {
"description": "A URI pointing to a resource with mime type image/*
representing the asset to which this NFT represents. Consider making
any images at a width between 320 and 1080 pixels and aspect ratio
between 1.91:1 and 4:5 inclusive."
}
}
}
notice the
image
field, which is a link to an image. If we take example Project Memento, an NFT allowing change tiles you own to the desired letter. You basically trade “coordinates” on the website and there is NO NEED of an image for the project itself to work.
We could store them on the blockchain, but that would be excessively expensive
- The cost of data storage is 640k gas per kilobyte of data
- The current gas price is approximately 50 Gwei (or 0.000000050 ETH).
- At today’s price (30th October 2021), 1 ETH is approximately $4000
- Each kilobyte for an image would cost $40
Worth noting there are NFTs that do store assets on-chain, like CryptoPunks and CryptoKitties. They did this before fees went crazy.
My personal take is that it is the perceived intrinsic value of NFTs that makes opinions of them so polarising.
is web 3.0 DRM 2.0? - https://twitter.com/jshbz/status/1442611217000849408
(OK, so I did end up finishing the three.js journey lesson on raycasting.)
Raycasting is a technique that can be used to pick up mouse clicks in a 3D space. It sort of works by shooting a straight line, a.k.a a vector.
Set up a raycaster with `THREE.Raycaster()`; this little boilerplate lets you hover over a 3D object and "react" to the event:
/**
* Animate
*/
const clock = new THREE.Clock();
const tick = () => {
const elapsedTime = clock.getElapsedTime();
raycaster.setFromCamera(mouse, camera);
const objectsToTest = [object1, object2, object3];
const intersects = raycaster.intersectObjects(objectsToTest);
for (const intersect of intersects) {
intersect.object.material.color.set("#0000ff");
}
for (const object of objectsToTest) {
if (!intersects.find((intersect) => intersect.object === object)) {
object.material.color.set("#ff0000");
}
}
// Update controls
controls.update();
// Render
renderer.render(scene, camera);
// Call tick again on the next frame
window.requestAnimationFrame(tick);
};
tick();
`onclick` events can also be intercepted here too, but will require setting up another event listener.

- only one `h1` on a page
- use `h` tags to structure the order of your page like a textbook, but not in regards to styling
- `h` tags should work downwards sequentially
- use `header` and `nav` tags

- `sin`, `cos` and `tan` again?
- `ArrayBuffer` - you can't read an `ArrayBuffer` directly. Instead you need to use a `DataView` (see the sketch after the snippet below).
- `AudioContext` - on Safari we need to use `webkitAudioContext`:
let AudioContext = window.AudioContext || (window as any).webkitAudioContext;
this.audioContext = new AudioContext();
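The `DataView` point above in a tiny sketch:

```js
// you don't index an ArrayBuffer directly; a DataView (or typed array) reads it for you
const buffer = new ArrayBuffer(4);
const view = new DataView(buffer);

view.setUint8(0, 255); // write a single byte
console.log(view.getUint8(0)); // 255
console.log(view.getUint32(0)); // reads 4 bytes as one unsigned integer
```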
`<pre>` tags are used for preserving formatted text https://developer.mozilla.org/en-US/docs/Web/HTML/Element/pre

const Crypto = require('crypto')
function randomString(size = 21) {
return Crypto
.randomBytes(size)
.toString('base64')
.slice(0, size)
}
console.log(randomString()) // '/VuPgyBlk/aZjPADhMqQk'
Design systems help maintain, govern and preserve brand standards. Colors, typography, horizontal & vertical spacing and rhythm, as well as numerous other design aspects that can be mutated over time through various iterations of design. If design aspects are changed consistently, there is a possibility of causing negative impact to a brand’s image, website or application and can appear disheveled. Additionally, the brand can lose credibility with these types of inconsistencies.
Tier 1: Typography & color styles, spacing rules
Tier 2: Atom level components using styling from Tier 1 which may include buttons, dropdowns and input fields
Tier 3: Combining elements of tier 2 components to create more complex patterns.
background-image
.grid {
width: 100%;
height: 100%;
background-image: repeating-linear-gradient(
90deg,
transparent calc(1px + var(--grid-blur)),
transparent var(--grid-size),
var(--grid-color, black)
calc(var(--grid-size) + 1px + var(--grid-blur, 0px))
),
repeating-linear-gradient(
180deg,
var(--grid-color, black) 6%,
transparent calc(1px + var(--grid-blur, 0px)),
transparent var(--grid-size),
var(--grid-color, black)
calc(var(--grid-size) + 1px + var(--grid-blur, 0px))
);
}
Then transform it to make it look like a flat plane:
transform: perspective(10vh) rotateX(47deg) translateZ(41px);
The `audio` HTML element can take multiple sources:
<audio controls>
<source src="myAudio.mp3" type="audio/mpeg">
<source src="myAudio.ogg" type="audio/ogg">
<p>Your browser doesn't support HTML5 audio. Here is
a <a href="myAudio.mp3">link to the audio</a> instead.</p>
</audio>
- `git reset HEAD~ --hard`
- `git reset HEAD~ --soft` will roll back the commit but leave the changes staged, so you can re-commit the change with some tweaks if necessary.
- `git reset HEAD~{number} --hard` can be used if you want to roll back several commits in one go.

Been busy, but not sure if I've been productive lately. Lots of just-in-time learning but not been going deep on anything. Reminder that going deep on a topic will open up new rabbit holes and unravel new sources of inspo.
Finally got round to finishing the particles lesson on three.js journey. Some things I fundamentally don't understand:
- `setAttribute` - what are the attributes we're setting? What does this look like? Can it be anything we want?
- `Float32Array` - a kinda magical primitive to me. It's just an array of numbers right?

Learning a lot about blending 3D objects together. The main takeaway is that three.js doesn't necessarily know what should be rendered on top of what. This can be worked around with blending, and tinkering with the `alphaMap`, `alphaTest`, `transparent` and `depthWrite` properties of your material.
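A rough sketch of those material tweaks on a particles material (the `particleTexture` variable is hypothetical):

```js
const particlesMaterial = new THREE.PointsMaterial({
  size: 0.1,
  sizeAttenuation: true,
  transparent: true,
  alphaMap: particleTexture, // assumes a texture loaded elsewhere
});

// either discard near-transparent pixels…
particlesMaterial.alphaTest = 0.001;
// …or stop particles writing to the depth buffer so stacking artefacts disappear
particlesMaterial.depthWrite = false;
// additive blending also gives a nice glow where particles overlap
particlesMaterial.blending = THREE.AdditiveBlending;
```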
Learning all about the limitations of email template development today. MJML is an interesting technology, but it's still annoying to not have the full capabilities of CSS to hand.
Ported my three.js audio visualiser logic to a React component and was pleasantly surprised to see the class kinda just work when passing HTML to it via React refs.
Got the web audio frequency data wired into the three.js shaders I was tinkering with to create a 3D music visualiser!
This is where the magic happens…
createVisualizer() {
this.audioContext = new AudioContext();
if (!this.audioElem) {
return;
}
this.audioElem.crossOrigin = "anonymous"; // Otherwise the browser moans
const src = this.audioContext.createMediaElementSource(this.audioElem);
const analyser = this.audioContext.createAnalyser();
src.connect(analyser);
analyser.connect(this.audioContext.destination);
analyser.fftSize = 128; // Bump to 256 to get small sample rate
const vis = document.getElementById("vis");
// Still dont understand this bit
const bufferLength = analyser.frequencyBinCount;
const dataArray = new Uint8Array(bufferLength);
let barHeight: any;
function renderFrame() {
requestAnimationFrame(renderFrame);
analyser.getByteFrequencyData(dataArray);
for (let i = 0; i < bufferLength; i++) {
barHeight = dataArray[i]; // This is our magic number that will fluctuate up and down for our data viz!
if (barHeight < 50) {
return;
}
vis!.style.height = `${barHeight + 100}px`;
vis!.style.width = `${barHeight + 100}px`;
}
}
renderFrame();
}
Carried on snooping around the shader source code at - https://codesandbox.io/s/orb-shader-7n8j7?file=%2Fsrc%2Findex.js
Forked the code and wired it up to dat.GUI
instead. https://stackblitz.com/edit/vite-wzp45r
You can make a synthesizer with the Web Audio API! - https://www.youtube.com/watch?v=laCjGMhASp8
This was a great tutorial on making a audio player with vanilla js - https://www.youtube.com/watch?v=jZL9gVwxO-U
Going the down the rabbit hole of web audio.
`BufferGeometry`.

Fun building a 3D scene with just Three.js primitives, e.g. with just built-in geometries like `SphereGeometry` and `BoxGeometry`. You can get pretty far.
Learnt about how to make high-fidelity animations with Theatre.js. First time I have seen a UI used in conjunction with code to build animations. It actually makes a lot of sense! It combines the best bits of dat.GUI and GSAP to create an intuitive animation authoring experience.
Been playing with Astro some more and finally got my head around how to use npm modules client side in a .astro
file. It’s not that obvious…
First thing I tried was something like this:
<!-- Test.astro -->
<canvas class="webgl"></canvas>
<script type="module">
import * as THREE from 'three'
console.log(THREE) //undefined :(
</script>
This returns Uncaught TypeError: Failed to resolve module specifier "three". Relative references must start with either "/", "./", or "../".
in the console.
Astro doesn't let you import npm modules in inline script tags within `.astro` files, unfortunately. However, we can import them in an external `.js`/`.ts` file, then make use of `Astro.resolve` like so:
<!-- Test.astro -->
<canvas class="webgl"></canvas>
<script src={Astro.resolve('./myScript.js')} type="module"/>
Inside myScript.js
we can import things as expected.
// myScript.js
import * as THREE from 'three';
console.log(THREE) // Three.js module!
Working demo here.
Some neat TS tricks in this article https://www.cstrnt.dev/blog/three-typescript-tricks
I don't use `Readonly` enough to enforce that I'm not mutating things.
Record
is a utility type to make typing objects easier.
In fact there are loads of utility types to use! - https://www.typescriptlang.org/docs/handbook/utility-types.html
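Quick sketches of both (the names are made up):

```ts
// Readonly stops accidental mutation at compile time
const config: Readonly<{ port: number }> = { port: 3000 };
// config.port = 4000; // error: cannot assign to 'port' because it is a read-only property

// Record makes typing simple lookup objects easier
type Environment = "dev" | "staging" | "prod";
const apiUrls: Record<Environment, string> = {
  dev: "http://localhost:3000",
  staging: "https://staging.example.com",
  prod: "https://example.com",
};
```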
Some more shadow fun in Three.js, there a bunch of shadow map algorithms that can be used:
THREE.BasicShadowMap Very performant but lousy quality
THREE.PCFShadowMap Less performant but smoother edges
THREE.PCFSoftShadowMap Less performant but even softer edges
THREE.VSMShadowMap Less performant, more constraints, can have unexpected results
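A minimal sketch of switching the algorithm (assumes a `renderer`, a light and some meshes already exist):

```js
// enable shadows and pick one of the algorithms listed above
renderer.shadowMap.enabled = true;
renderer.shadowMap.type = THREE.PCFSoftShadowMap;

// both the light and the meshes need to opt in
directionalLight.castShadow = true;
sphere.castShadow = true;
floor.receiveShadow = true;
```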
Rule of thumb, try to avoid using dynamic shadows as it uses a lot of CPU. A good alternative is to use ‘baked shadows’.
A cool technique with baked shadows is add the a shadow texture to a mesh and place the mesh directly underneath a 3D object. You can adjust the opacity of the mesh to create different lighting levels.
Why are fixed footers such a b%tch? Let's talk through the options…
Absolute positioning sucks, because it means the footer potentially floats over some content at the bottom of the page.
Flexbox kinda works-ish. It means wrapping your content in extra divs and justifying vertical content with `space-between`. But this feels clunky, and I'm getting a weird effect where the main content is centered on the page when I want it to stick to the top.
Grid might be the best way forward, but it's not supported everywhere, such as in a `react-pdf` context.
Learning about lighting in three.js
There’s a lot of different light sources available and positioning them will take experimentation. Make use of the THREE light helpers which visualise where the light source is coming from combined with dat.GUI
to quickly move the light sources around.
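A rough sketch of that workflow (assumes a `scene` is already set up; the light values are arbitrary):

```js
import * as THREE from "three";
import * as dat from "dat.gui";

const gui = new dat.GUI();

// a directional light plus its helper so you can see where it shines from
const directionalLight = new THREE.DirectionalLight(0xffffff, 0.5);
directionalLight.position.set(2, 2, -1);
scene.add(directionalLight);

const lightHelper = new THREE.DirectionalLightHelper(directionalLight, 0.2);
scene.add(lightHelper);

// tweak intensity and position live instead of hard-coding values
gui.add(directionalLight, "intensity").min(0).max(1).step(0.01);
gui.add(directionalLight.position, "x").min(-5).max(5).step(0.01);
```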
Light sources come at a computational cost and should be used sparingly.
Minimal cost:
AmbientLight
HemisphereLight
Moderate cost:
DirectionalLight
PointLight
High cost:
SpotLight
RectAreaLight
Consider Baking
A good technique for lighting is called baking. The idea is that you bake the light into the texture. This can be done in a 3D software. Unfortunately, you won’t be able to move the lights, because there are none and you’ll probably need a lot of textures.
Starting to learn about use-gesture. Great little tutorial at https://www.youtube.com/watch?v=dBWhAGhw7wM
use-gesture tries to make it easy to add new gesture primitives such as `onClick` and extend to lots more gestures like `onDrag` and `onPinch`, so we can create much more native-like experiences.
It works well with React via some out-of-the-box hooks. The boilerplate code is quite simple:
function MyComponent(){
  const myDomRef = useRef()
  useGesture({
    onDrag: (state) => doSomething(state), // handlers must be functions
    domTarget: myDomRef
  })
  return <div ref={myDomRef}> hey </div>
}
Serve `webp` and `avif` using the `<picture>` element. As `<picture>` allows browsers to skip images they do not recognize, you can include images in your order of preference. The browser selects the first one it supports.

<picture>
<source srcset="img/photo.avif" type="image/avif">
<source srcset="img/photo.webp" type="image/webp">
<img src="img/photo.jpg" alt="Description" width="360" height="240">
</picture>
24873.631047229086 km.

Reactive declarations with the `$` syntax are pretty neat.
not only is it used to create computed values
:
$: doubled = count * 2;
but code can be reactively run if the dependency has changed too:
$: {
console.log(`the count is ${count}`);
alert(`I SAID THE COUNT IS ${count}`);
}
big gotcha with svelte is that array methods like pop
, shift
, unshift
and splice
don’t work as expected. Svelte needs a re-declaration to work
e.g
let numbers = [1, 2, 3, 4];
function addNumber() {
numbers.push(numbers.length + 1);
} // DOESNT UPDATE `numbers`
// INSTEAD
function addNumber() {
numbers = [...numbers, numbers.length + 1];
}
A simple rule of thumb: the name of the updated variable must appear on the left hand side of the assignment
Props in svelte use the `export` keyword. Little bit weird…
Like Vue.js, svelte has its own syntax for HTML conditionals and loops. It's not so adamant about providing an index when looping, but it is encouraged. The syntax is slightly different:
{#each things as thing (thing.id)}
<Thing name={thing.name}/>
{/each}
{#await promise}
<p>...waiting</p>
{:then number}
<p>The number is {number}</p>
{:catch error}
<p style="color: red">{error.message}</p>
{/await}
TBC - 6.Bindings
Three.js only supports `typeface` fonts; it's possible to convert fonts with something like this - https://gero3.github.io/facetype.js/
fontLoader.load(
'/fonts/helvetiker_regular.typeface.json',
(font) =>
{
const textGeometry = new THREE.TextGeometry(
'Hello Three.js',
{
font: font,
size: 0.5,
height: 0.2,
curveSegments: 12,
bevelEnabled: true,
bevelThickness: 0.03,
bevelSize: 0.02,
bevelOffset: 0,
bevelSegments: 5
}
)
const textMaterial = new THREE.MeshBasicMaterial()
const text = new THREE.Mesh(textGeometry, textMaterial)
scene.add(text)
}
)
to animate and add other materials, we do it all inside the callback function.
- `materials` are the abstractions over creating `shaders` to produce textures for 3D objects
- a `THREE.Mesh` is comprised of a geometry and a material. The material is supplied as the second argument to the `THREE.Mesh` constructor.

const material = new THREE.MeshBasicMaterial();
const sphere = new THREE.Mesh(new THREE.SphereGeometry(0.5, 16, 16), material);
const textureLoader = new THREE.TextureLoader();
const doorColourTexture = textureLoader.load("/textures/door/color.jpg");
The texture can then be passed via the `map` option of the `THREE.MeshBasicMaterial` constructor:

const material = new THREE.MeshBasicMaterial({
map: doorColourTexture,
});
There’s a lot more params that can be provided to the MeshBasicMaterial
options object. like transparency
and wireframe
see docs for reference
Note, materials for THREE.PlaneGeometry
only get rendered on one side. this can be worked around with material.side = THREE.DoubleSide;
. But means double the amount of triangles need to be calculated .
THREE.MeshBasicMaterial
- lets you map your own textures
MeshNormalMaterial
- displays a nice purple, blueish, greenish color
`MeshMatcapMaterial` picks colours relative to the orientation of the camera, creating the illusion of a light source (thus fewer resources required).
MeshDepthMaterial
- displays a hue of colour dependent on depth of the camera.
You can use this material for special effects where you need to know how far the pixel is from the camera.
If we want to use something like the THREE.MeshLambertMaterial
we need a real light source
const pointLight = new THREE.PointLight(0xffffff, 0.5)
pointLight.position.x = 2
pointLight.position.y = 3
pointLight.position.z = 4
scene.add(pointLight)
MeshToonMaterial
- give a cool cel shaded look!
MeshStandardMaterial
uses physically based rendering principles. it supports lights but with a more realistic algorithm and better parameters like roughness and “metalness”.
The `aoMap` property (literally "ambient occlusion map") will add shadows where the texture is dark. Useful for bringing depth to textures.
To make this work we need to add another attribute to the geometry using setAttribute
, which duplicates the existing uv
attribute
sphere.geometry.setAttribute('uv2', new THREE.BufferAttribute(sphere.geometry.attributes.uv.array, 2))
Now we can control the intensity of the aoMap
like so:
material.aoMap = doorAmbientOcclusionTexture
material.aoMapIntensity = 1
displacementMap
property and then we can mess around with the displacementScale
material.displacementMap = doorHeightTexture;
// crank up to exaggerate the texture
material.displacementScale = 0.1;
envMap
or Environment maps can create nice mirror effect which map a surrounding texture into your mesh.
const cubeTextureLoader = new THREE.CubeTextureLoader()
const environmentMapTexture = cubeTextureLoader.load([
'/textures/environmentMaps/0/px.jpg',
'/textures/environmentMaps/0/nx.jpg',
'/textures/environmentMaps/0/py.jpg',
'/textures/environmentMaps/0/ny.jpg',
'/textures/environmentMaps/0/pz.jpg',
'/textures/environmentMaps/0/nz.jpg'
])
One of the best sources is HDRIHaven. To convert an HDRI to a cube map, you can use this online tool: https://matheowis.github.io/HDRI-to-CubeMap/
Obsidian backlinks use `[[ stuff ]]`, which is not part of the markdown syntax. Luckily this can be turned off, so backlinks use proper md link syntax. The `New link format` setting is set to `Relative path to file`. This produces urls which will work with `<a href>` tags.
The links still point at `.md` files, so urls will look something like `../fleeting-note/20210928.md`. The `.md` makes the url invalid, so we need to chop it off somehow…
/fleeting-notes/20200910.md /fleeting-notes/20200910
This is handled with a `_redirects` file.
`.astro` files sometimes play up, but closing and reopening the file seems to fix it for now.

I'm seeing smart people in the web dev twitter bubble completely ridicule the web3 and NFT space.
I like this quote a lot from Alex Russell, https://twitter.com/slightlylate/status/1442271971765411848
So, after many many billions of dollars invested, it’s fair to ask “what problems have been solved?” Then, we should weigh those up against the costs: explosion in ransomware, acceleration of money laundering by crime syndicates, climate change. Are we winning?
`Astro` class and using it in the frontmatter. All in all, I'm excited to explore Astro in a bit more depth.
class and using it the frontmatter. All-in-all I’m excited to explore astro in bit more depth.frontmatter-file-batcher
🤵 💒 👰
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load(
"http://192.168.1.106:8080/assets/images/c548d2d05b1ab054651d3e401b8e4ede.jpg"
);
Then pass the texture to the `MeshBasicMaterial` class:

const material = new THREE.MeshBasicMaterial({ map: texture });
const textureLoader = new THREE.TextureLoader();
const texture = textureLoader.load(
"http://192.168.1.106:8080/assets/images/c548d2d05b1ab054651d3e401b8e4ede.jpg",
() => {
console.log("load");
},
() => {
console.log("progress");
},
() => {
console.log("error");
}
);
A loading bar is often created by utilising the `THREE.LoadingManager` class. There are callbacks which can be accessed:
const loadingManager = new THREE.LoadingManager();
const textureLoader = new THREE.TextureLoader(loadingManager);
loadingManager.onStart = () => {
console.log("Start");
};
loadingManager.onLoad = () => {
console.log("Load");
};
loadingManager.onError = () => {
console.log("Error");
};
UV coordinates is the mechanism for specifying how textures get mapped onto a geometry. Premade geometries by Three.js have set UV coordinates which can be access via the attributes
property on any Geometry classs. it will be a represented as a Float32Array
Textures can be remapped at runtime by tweaking properties on the texture object:
- repeat.x/y - repeat a texture x amount of times on the x or y axis
- offset.x/y - offset a texture on either the x or y axis
- wrapS - needs to be set to THREE.MirroredRepeatWrapping for mirrored repeats on the x axis
- wrapT - needs to be set to THREE.MirroredRepeatWrapping for mirrored repeats on the y axis
- rotation - rotates the texture
- center - changes the pivot point when rotating
Image filtering can change the appearance of textures to make them more or less sharp:
- minFilter - has better performance but will stretch and blur your textures
- magFilter - will make your image look sharp and remove any blurriness
- THREE.NearestFilter
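A small sketch of how those texture settings look in code; the texture path and values here are just for illustration (textureLoader is the loader created earlier):
const colorTexture = textureLoader.load('/textures/door/color.jpg')
colorTexture.repeat.x = 2
colorTexture.repeat.y = 3
// mirrored repeat needs wrapS/wrapT set on the texture itself
colorTexture.wrapS = THREE.MirroredRepeatWrapping
colorTexture.wrapT = THREE.MirroredRepeatWrapping
colorTexture.rotation = Math.PI * 0.25
// move the pivot to the middle of the texture before rotating
colorTexture.center.x = 0.5
colorTexture.center.y = 0.5
// NearestFilter keeps the texture crisp instead of blurring it
colorTexture.minFilter = THREE.NearestFilter
colorTexture.magFilter = THREE.NearestFilter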
texture resolutions must be a power of two otherwise performance will suffer
jpg is smaller but lossy. png is lossless but heavy
places to get textures
Keep a params object which you can reference when using the three.js api.
// get height and width from the window object
const sizes = {
width: window.innerWidth,
height: window.innerHeight,
};
window.addEventListener("resize", (e) => {
sizes.width = window.innerWidth;
sizes.height = window.innerHeight;
//both the camera and renderer needs to notified and refreshed
camera.updateProjectionMatrix();
renderer.setSize(sizes.width, sizes.height);
// importantly the aspect of also camera also needs to be udpated
camera.aspect = sizes.width / sizes.height;
// important for retina screen or an screens that have a pixel ration higher than 2. No need to accomdate for anything higher than 2
renderer.setPixelRatio(Math.min(window.devicePixelRatio, 2));
});
once we have our mouse event positions normalised to -0.5 to 0.5, we need to remember that a negative y position makes the camera go "up", therefore we need to invert the y axis
const cursor = {
x: 0,
y: 0,
};
window.addEventListener("mousemove", (e) => {
cursor.x = e.clientX / sizes.width - 0.5;
cursor.y = -(e.clientY / sizes.height - 0.5);
console.log("hey", cursor);
});
inside our tick function we can simply update the camera with the cursor values
const tick = () => {
//Update camera
camera.position.x = cursor.x * 10;
camera.position.y = cursor.y * 10;
// Render
renderer.render(scene, camera);
// Call tick again on the next frame
window.requestAnimationFrame(tick);
};
to keep the position of the item in the center we do this:
camera.position.x = cursor.x * 10;
camera.position.y = cursor.y * 10;
camera.lookAt(mesh.position);
using trigonometry we can make our object spin however many rotations we want. remember that Math.PI
will make an object spin 180 degrees.
camera.position.x = Math.sin(cursor.x * Math.PI * 2) * 3;
camera.position.z = Math.cos(cursor.x * Math.PI * 2) * 3;
camera.lookAt(mesh.position);
or just use the built in controls
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls";
const control = new OrbitControls(camera, canvas);
There is also a smoother, flowing control (damping) like this
control.enableDamping = true;
but also remember to update your tick function with
control.update();
With the React.forwardRef function you can expose a ref.
const FancyButton = React.forwardRef((props, ref) => (
<button ref={ref} className="FancyButton">
{props.children}
</button>
));
// You can now get a ref directly to the DOM button:
const ref = React.createRef();
<FancyButton ref={ref}>Click me!</FancyButton>;
The ref can be passed around easily into any other component.
const sizes = {
width: 800,
height: 600,
};
const aspectRatio = sizes.width / sizes.height;
window.addEventListener("mousemove", (e) => {
cursor.x = e.clientX / sizes.width - 0.5;
cursor.y = e.clientY / sizes.height - 0.5;
console.log("hey", cursor);
});
Subtracting 0.5 normalises the cursor values to a -0.5 to 0.5 range.
Trying to chip away at lesson 7 of Three.js journey.
The third and fourth arguments of PerspectiveCamera indicate near and far.
The third and fourth parameters called near and far, correspond to how close and how far the camera can see. Any object or part of the object closer to the camera than the near value or further away from the camera than the far value will not show up on the render.
🤔
The Camera class is not meant to be used directly; instead there are some common camera types that can be used:
- ArrayCamera - renders with multiple cameras, sort of like in split-screen video games
- StereoCamera - mimics the eyes to create 3D scenes for VR headsets
- CubeCamera - can be used to render your surroundings
- OrthographicCamera - renders your scene without perspective, so all elements have the same size regardless of distance from the camera
- PerspectiveCamera - simulates a real-life camera with perspective
window.requestAnimationFrame(). I like the way Bruno puts it: requestAnimationFrame is not for animations, it's for running a function on each frame.
new THREE.Clock() comes built in with the .getElapsedTime() method, which returns the elapsed time in seconds. With this normalised value we can guarantee we animate objects consistently regardless of the computer it is on. Math.PI will rotate an object 180 degrees. Math.sin and Math.cos can create a wavy animation returning back to 0. A minimal tick sketch using Clock follows the GSAP snippet below.
// Move object back and forth
GSAP.to(mesh.position, {
x: 2,
duration: 1,
delay: 1,
});
GSAP.to(mesh.position, {
x: 0,
duration: 1,
delay: 2,
});
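And the Clock-based tick loop mentioned above; a minimal sketch assuming a mesh, camera and renderer already exist:
const clock = new THREE.Clock()

const tick = () => {
  // elapsed seconds since the clock started, consistent on any machine
  const elapsedTime = clock.getElapsedTime()
  mesh.position.y = Math.sin(elapsedTime)
  mesh.position.x = Math.cos(elapsedTime)
  renderer.render(scene, camera)
  window.requestAnimationFrame(tick)
}
tick()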
Three.js journey fun, positioning.
mesh and camera objects inherit from the base Object3D class, which has the property position. The position has x, y and z properties to control the position in the scene.
- x is right - left
- y is up - down
- z is forward - back
mesh.position.length() returns the distance of the object from the center of the scene.
mesh.position.distanceTo(camera.position) returns the distance of the object to a given camera object.
mesh.position.normalize() scales the vector so its length becomes 1.
mesh.position.set()
is a quick way to move the position of an object. it takes 3 args x
,y
and z
.
View a visual helper of the axes with AxesHelper(). Blue = z, green = y, red = x. The length of each line represents 1 unit.
const axesHelper = new THREE.AxesHelper();
scene.add(axesHelper);
Normalise your units and stick to them, e.g. 1 === 1km. It can change from project to project, but just be consistent.
Scaling an object works in the same way: mesh.scale, which also has x, y and z properties, just like position.
Rotations are slightly harder. You can use either rotation or quaternion.
The rotation object is an Euler class. This requires you to think about which axis the object is rotating on. pi is either 3.14159 or Math.PI; a whole rotation is pi x 2.
Rotating the x, y and z properties in different orders gives different results, so remember how imperative the execution order is. rotation.reorder lets you change the rotation order by applying a string in the order you want, e.g. rotation.reorder('YXZ').
quaternion is a mathematical way to get around these gimbal locks. It's a representation of the rotation which will "just work". This is a black box for now.
lookAt() is a really useful method to look directly at another Vector3 object, e.g. mesh.position. You can do something like camera.lookAt(mesh.position) so the camera tracks an object.
3D objects get very complicated, so if you want to create lots of 3D objects and group them, use new THREE.Group(). Objects can be added to the group so that they can be rotated and scaled all at the same time.
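A quick sketch of grouping; the two cubes are placeholders:
const group = new THREE.Group()
scene.add(group)

const cube1 = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshBasicMaterial({ color: 0xff0000 }))
const cube2 = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshBasicMaterial({ color: 0x00ff00 }))
cube2.position.x = 2
group.add(cube1)
group.add(cube2)

// transforming the group moves, rotates and scales everything inside it
group.rotation.y = Math.PI * 0.25
group.scale.y = 1.5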
Started the first real coding exercise of threejs journey. Starting slow with the basics.
Basics of a scene. we need a scene
, geometry
and material
. We combine the geometry
and material
together to make a mesh
which can be added to a scene.
Next we need a camera
that also needs to be added to the scene
Finally the scene and camera needs to be placed onto a renderer
. There are many types of renderers but the main one is THREE.WebGLRenderer
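Putting those pieces together, a minimal sketch; the canvas selector and sizes are assumptions:
const scene = new THREE.Scene()

// geometry + material combine into a mesh
const geometry = new THREE.BoxGeometry(1, 1, 1)
const material = new THREE.MeshBasicMaterial({ color: 0xff0000 })
const mesh = new THREE.Mesh(geometry, material)
scene.add(mesh)

// the camera also gets added to the scene
const camera = new THREE.PerspectiveCamera(75, 800 / 600)
camera.position.z = 3
scene.add(camera)

// the renderer draws the scene from the camera's point of view
const renderer = new THREE.WebGLRenderer({ canvas: document.querySelector('canvas.webgl') })
renderer.setSize(800, 600)
renderer.render(scene, camera)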
system errors
and expected errors
Still making my way through "Intro to Apollo Client 3.0".
Use the fields property in the Apollo cache typePolicies to create computed fields which are dynamically created per API request, e.g. you could combine a first and last name together like this.
Apollo normalises cached objects by id, or by whatever you have specified via the keyFields type policy. This is important because if you wanted to query just a single one of the "Things", Apollo would still make a network request, even though it already has the information in the cache. To let Apollo know that the data may already be in the cache we have to specify read rules for the query in the InMemoryCache. Make use of the toReference callback function, which requires you to provide the __typename and the unique identifier configured for that resolver (see 1h40). Cached items can be removed with the .evict() method.
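A sketch of what such a read rule could look like; the Thing type and id argument are made-up placeholders:
const cache = new InMemoryCache({
  typePolicies: {
    Query: {
      fields: {
        thing: {
          // if the Thing is already normalised in the cache, return a
          // reference to it instead of hitting the network
          read(existing, { args, toReference }) {
            return toReference({ __typename: 'Thing', id: args?.id });
          },
        },
      },
    },
  },
});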
method.Nullish coalescing operator ??
is handy. It’s similar to tge logical OR operator ||
, but it also determines 0
or ''
(empty string) as falsey.
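Tiny comparison:
const a = 0 ?? 42; // 0   -> ?? only falls back on null/undefined
const b = 0 || 42; // 42  -> || also falls back on 0, '' and other falsey values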
By default Apollo identifies cached objects by id and __typename (unless overridden with the keyFields type policy), so it's important to return an id if possible.
Back on the GQL / Apollo training. Having a second look at setting up the backend from scratch, referencing my old learning journal, which was kinda meta.
This guide is still the best and most comprehensive i’ve come across so far which is specific to node, ts, apollo and prisma.
Funneling the prisma types into codegen means you get all of Prisma's auto-generated types along with the types of your graphql schema when writing your resolvers. It's really elegant.
Learning how to use .graphql files directly in a node project. Stack Overflow to the rescue.
Using @graphql-tools/load, which is also what Stack Overflow recommends:
import { loadSchemaSync } from "@graphql-tools/load";
import { GraphQLFileLoader } from "@graphql-tools/graphql-file-loader";
export const schema = loadSchemaSync("./src/schema/*.graphql", {
loaders: [new GraphQLFileLoader()],
});
so many different flavours of apollo-server!
Start with apollo-server. If you need to serve other endpoints on the same node server, swap to apollo-server-express. There are also many other supported node backends, including serverless!
Moving on to the front end of the tutorial.
I am only just realising now you get GQL auto completion within the vscode if you have the apollo GraphQL client extension installed
Finished the full stack apollo tutorial
Apollo has a makeVar function. You can define reactive variables client side like so:
// Initializes to true if localStorage includes a 'token' key,
// false otherwise
export const isLoggedInVar = makeVar<boolean>(!!localStorage.getItem('token'));
// Initializes to an empty array
export const cartItemsVar = makeVar<string[]>([]);
This can be accessed in Mutation hooks via any of the callback functions simply by calling the name of the reactive var as a function, e.g. isLoggedInVar()
Need to play around with cache.modify
in more detail. This can let us modify the apollo cache in any way.
So Apollo Studio is frikkin awesome!?
It’s possible to apollos cached feilds . My mental model for this is still not well fleshed out. Need to go deeper…
If a declared field’s type is in [Square Brackets], it’s an array of the specified type. If an array has an exclamation point after it, the array cannot be null, but it can be empty.
the resolver for a parent field always executes before the resolvers for that field’s children.
it’s best to keep resolvers very small. offload complex logic to seperate xAPI.ts/js
or xManagers.ts/js
file . This way if there are any changes to your backend, resolver logic does not need to be touched.
Top level resolvers will still execute even if there are some fields we have not written custom resolvers for yet. This is because Apollo defines a default resolver for any field you don't define a custom resolver for.
Saw React.useCallback at work.
Enabled the "highlight updates when components render" option in React dev tools. The React Developer Tools highlights components that are re-rendering at a given point in time. Depending on the frequency of updates, a different color is used. Blue shows infrequent updates, ranging to green, yellow, and red for components that update frequently. source
A refetch policy on mutations forces the cache to update, however this means there are two network requests. Not the end of the world… Alternatively use the update callback; within this callback you have access to the cache object, which has some additional methods you can utilise like modify and writeQuery.
react-dnd has a slightly steep learning curve, but the actual usage of the API is quite nice.
Wrap your app in the DndProvider and provide it with the HTML5Backend because we're working with web pages.
import React, { Component } from "react";
import { render } from "react-dom";
import { DndProvider } from "react-dnd";
import { HTML5Backend } from "react-dnd-html5-backend";
import "./style.css";
function App() {
return (
<DndProvider backend={HTML5Backend}>
      <div className="app-container">....</div>
</DndProvider>
);
}
render(<App />, document.getElementById("root"));
The two main hooks are useDrag and useDrop.
With useDrag, we deconstruct the react-dnd props in the first item of the returned array; in this example, isDragging:
const [{ isDragging }, drag] = useDrag(() => ({
  type: 'CARD',
  item: { test: 'test', type: 'CARD' },
  collect: monitor => ({
    isDragging: !!monitor.isDragging()
  })
}));
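The drop side looks similar. A sketch with useDrop, where the 'CARD' type matches the drag source above:
const [{ isOver }, drop] = useDrop(() => ({
  accept: 'CARD',
  // called when a compatible item is released over this target
  drop: (item) => console.log('dropped', item),
  collect: (monitor) => ({
    isOver: !!monitor.isOver(),
  }),
}));
// then attach it: <div ref={drop}>...</div>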
The deconstructed props come from the collect callback function.
react-dnd: making a start on this youtube series. Core concepts:
- Backends
- Item Types
- Monitors
- Collectors - turn monitors into react props. This is referred to as "collecting props"
- Drag Sources
- Drop targets
monitors are converted to state which is available to react via a context
Props and composition give you all the flexibility you need to customize a component’s look and behavior in an explicit and safe way. Remember that components may accept arbitrary props, including primitive values, React elements, or functions.
If you want to reuse non-UI functionality between components, we suggest extracting it into a separate JavaScript module. The components may import it and use that function, object, or a class, without extending it.
no designer can anticipate all possible layouts. Nor should an engineer attempt to code layout exactly as specified by design
Re-reading Goodbye Clean Code. Abstraction comes at a cost.
My code traded the ability to change requirements for reduced duplication, and it was not a good trade
The article also touches upon empathy for the fellow developers you work with. How would you like it if someone smashed down your lego empire to rebuild it their way?
I didn’t talk to the person who wrote it. I rewrote the code and checked it in without their input. Even if it was an improvement (which I don’t believe anymore), this is a terrible way to go about it. A healthy engineering team is constantly building trust. Rewriting your teammate’s code without a discussion is a huge blow to your ability to effectively collaborate on a codebase together.
On let vs const: DA concludes just use whatever has been enforced in the codebase; it probably does not matter which you use. But it is at least worth understanding some of the subtle differences between them.
tldr use linter but…
Finally, remember that linters exist to serve you. If a linter rule annoys you and your team, delete it. It may not be worth it. Learn from your own mistakes.
Use the flex grow property to force elements to wrap when they can't fit in the viewport anymore.
null vs undefined
null and undefined: These are two special values. They’re special because there’s a lot of things you can’t do with them — they often cause errors. Usually, null represents that some value is missing intentionally, and undefined represents that a value is missing unintentionally. However, when to use either is left to the programmer. They exist because sometimes it’s better for an operation to fail than to proceed with a missing value.
Closure: Normally, when you exit a function, all its variables “disappear”. This is because nothing needs them anymore. But what if you declare a function inside a function? Then the inner function could still be called later, and read the variables of the outer function. In practice, this is very useful! But for this to work, the outer function’s variables need to “stick around” somewhere. So in this case, JavaScript takes care of “keeping the variables alive” instead of “forgetting” them as it would usually do. This is called a “closure”. While closures are often considered a misunderstood JavaScript aspect, you probably use them many times a day without realizing it!
prisma.findMany({
where: { record: some.id }, // optional
take: 1,
orderBy: { createdAt: "desc" }, // update depending on if you need first or last item in the db
});
GraphQLFileLoader
util from graphql-tools
is a way to load .gql
files into apollo without needing to use raw strings to construct the graphql file.
Don't treat "receiving props" as a special event. Avoid "syncing" props and state. In most cases, every value should either be fully controlled (through props), or fully uncontrolled (in local state). Avoid derived state when you can. And always be ready to render!
I like these principles:
- Don’t stop the data flow. Props and state can change, and components should handle those changes whenever they happen.
- Always be ready to render. A component shouldn’t break because it’s rendered more or less often.
- No component is a singleton. Even if a component is rendered just once, your design will improve if rendering twice doesn’t break it.
- Keep the local state isolated. Think about which state is local to a particular UI representation — and don’t hoist that state higher than necessary.
useMemo and useCallback in React are still confusing… feels like they get spammed as a safety net. How do you definitively know they're needed, or if there is a "better" way?
Reading a deep dive on useEffect. I keep needing to read and re-read sections of it.
If your mental model is "dependencies let me specify when I want to re-trigger the effect", this example might give you an existential crisis.
Yes this is exactly how I'm feeling right now.
Sometimes it is important that your useEffect runs only once, so the empty deps array is still a valid approach (I think!)
The techniques to not lie about deps are very interesting.
The updater-function form of setState looks very handy. useReducer is usually a good idea to keep logic and state clean inside a useEffect; it decouples Updates from Actions. useReducer is the cheat mode of hooks. Avoid putting your reducer inside a component, but do so if you need to read props from the component.
If you only use some functions inside an effect, move them directly into that effect:
Started reading 7 Habits of Highly Effective People:
prisma 2 schema auto completion rocks! but it auto completes with PascalCase
watch out for that!
Placing the prisma instance into the ApolloServer context is a good idea! It means that when we create our mutations and resolvers we have access to prisma via the context, e.g. context.prisma.
The real magic is when we combine the prisma autogenerated types with GraphQL Code Generator and the context. The generated Resolvers type then types the context argument:
import { Resolvers } from "./generated/graphql";
export const resolvers: Resolvers = {
Query: {
AllPosts: async (_, args, context) => {
},
},
Mutation: {
AddPost: async (_, args, context) => {},
LikePost: async (_, args, context) => {},
},
};
when working with apollo, you don't need to be 'smart' about reusing state which you already have and try to pass it around.
Apollo has a cache which it taps into, which means a fetch may not necessarily go over the network. So from a dev standpoint we can be quite liberal about continuously running refetches on the client.
model Link {
id Int @id @default(autoincrement())
createdAt DateTime @default(now())
description String
url String
postedBy User? @relation(fields: [postedById], references: [id])
postedById Int?
}
model User {
id Int @id @default(autoincrement())
name String
email String @unique
password String
links Link[]
}
Link has a relation to User. Link uses the postedById field to reference an item in the User table via its id field.
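With that relation in place, the generated client can pull the related records too. A sketch (the email value is just an example):
// fetch links together with the user who posted them
const links = await prisma.link.findMany({
  include: { postedBy: true },
});

// or go the other way and load a user's links
const user = await prisma.user.findUnique({
  where: { email: "someone@example.com" },
  include: { links: true },
});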
These are the touch points when adding a new table and graphql api endpoint:
- Update the prisma.schema
- Run npx prisma migrate dev to update the db
- Run npx prisma generate to update the prisma client libs
- Update the schema.graphql file to include any new mutation definitions
- The ApolloServer instance
fragment addressDetails on User {
name
street
zipcode
city
}
{
allUsers {
...addressDetails
}
}
it’s possible to paramtize and add default args to queries:
type Query {
allUsers(olderThan: Int = -1): [User!]!
}
use aliases when making multiple queries with different args
{
first: User(id: "1") {
name
}
second: User(id: "2") {
name
}
}
Union types work exactly like TS:
type Adult {
name: String!
work: String!
}
type Child {
name: String!
school: String!
}
union Person = Adult | Child
Then we can use conditional fragments if the output of the union type returns different things:
{
allPersons {
name # works for `Adult` and `Child`
... on Child {
school
}
... on Adult {
work
}
}
}
Graphql security is a can of worms…
Needed next-transpile-modules when including the ScrollTrigger plugin. useRef and useEffect are your friends.
'next/dynamic' is great. Got a 3x reduction in my initial JS load by implementing this! Big easy wins!
Needed a count and it was there! Also the schema.prisma file makes a lot of sense when you have lots of pre-written tables already. You can see how different relationships are created.
is this microfrontends?
useEffect
.
“But I only want to run it on mount!”, you’ll say. For now, remember: if you specify deps, all values from inside your component that are used by the effect must be there. Including props, state, functions — anything in your component.
import { ReactComponent as MyIcon } from "./icon.svg" is a thing if you have webpack set up correctly.
When anyone starts talking about various testing categories, dig deeper on what they mean by their words, as they probably don't use them the same way as the last person you read did.
vh in css is an interesting way to get an overlapping effect.
flex makes layouts so much easier to reason with. The hardest thing is thinking about responsive design in mockups which don't factor it in. I want to always incorporate responsive design into web work that I do. Interpolating responsive design into my work is tricky, but flex honestly makes it a lot easier!
background: url(/assets/mountain.png), url(/assets/whisp.png); - background can take multiple url()'s! https://developer.mozilla.org/en-US/docs/Web/CSS/background
Used test.each
for the first time. The syntax is kinda confusing but I grokked it in the end.
test.each([
[1, 1, 2],
[1, 2, 3],
[2, 1, 3],
])(".add(%i, %i)", (a, b, expected) => {
expect(a + b).toBe(expected);
});
The array of arrays passed is the variables and expected params for your test. The last item in the array is the expected
value. You can have any number of values before, which represent the arguments or conditions of your test.
If you set up a setTimeout with a variable, then change the variable before the setTimeout has executed, the setTimeout 'remembers' the old value still!
useState lets you persist state between refreshes/effects. Remember that when you update a value with useState this causes a re-render!
useRef lets you mutate values without causing re-renders.
deps in a useEffect is a way to let React know the exact dependencies the component needs to "watch" for changes to then refresh. But don't ever lie about deps as a workaround to prevent a refresh. This can cause unintended consequences.
In styled-components you can interpolate props with ${props => { props.thing /* do stuff with props.thing */ }}
Splitting things out into separate .tsx files helps clean up big files, but feels like it removes the benefit of single-file .tsx components.
Used git rebase in a real project. Have a better understanding of the use cases of when to use git rebase vs git merge:
- git merge takes everything from another branch and sticks it in front of all the commits in your branch. This can produce a "messy" git timeline.
- git rebase "re-writes" history by smartly merging your commits where they would have resided chronologically with another branch and slots commits into the correct place. This can often help with conflicts, but not always.
Read through the styled-components docs. Using a lot of CSS-in-JS at work so finally taking the time to understand how this library works…
const Button = styled.button`
/* Adapt the colors based on primary prop */
background: ${props => props.primary ? "palevioletred" : "white"};
color: ${props => props.primary ? "white" : "palevioletred"};
font-size: 1em;
margin: 1em;
padding: 0.25em 1em;
border: 2px solid palevioletred;
border-radius: 3px;
`;
render(
<div>
<Button>Normal</Button>
<Button primary>Primary</Button>
</div>
);
You can extend the styles of another component with styled():
const TomatoButton = styled(Button)`
color: tomato;
border-color: tomato;
`;
// Create the keyframes
const rotate = keyframes`
from {
transform: rotate(0deg);
}
to {
transform: rotate(360deg);
}
`;
// Here we create a component that will rotate everything we pass in over two seconds
const Rotate = styled.div`
display: inline-block;
animation: ${rotate} 2s linear infinite;
padding: 2rem 1rem;
font-size: 1.2rem;
`;
render(<Rotate>< 💅🏾 ></Rotate>);
Write global styles with <ThemeProvider>
and createGlobalStyle
You can write inline css(-ish) as a sort of way to bypass styled components entirely.
It’s still possible to target children classNames
within a styled component
const Thing = styled.div`
color: blue;
.something {
border: 1px solid; // an element labeled ".something" inside <Thing>
display: block;
}
`;
render(
<Thing>
<label htmlFor="foo-button" className="something">
Mystery button
</label>
<button id="foo-button">What do I do?</button>
</Thing>
);
git cherry-pick <git-hash>
to pick a specific commit id and move it into your branch. When cherry picking multiple commits it's really important to do it in the correct order; the later git commits should be committed first!
A let or const in a react component won't persist between re-renders. So if you initiate an instance of a class and bind it to a plain variable, the instance of the object will be lost if the component re-renders. To make sure the instance persists between re-renders we should use something like useState or useRef. What's useful about useRef is that mutating it does not re-render. This is perfect for binding to something like a mapboxGL instance where the object is constantly changing. The setup ends up living inside a useEffect, which feels kinda wrong.
assume zero knowledge but infinite intelligence
requestAnimationFrame.
props.children in React is the equivalent of <slots/> in Vue.js. Anything within a react component's tags can be accessed via props.children.
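A tiny sketch (Card is a made-up component):
const Card = ({ children }) => <div className="card">{children}</div>;

// anything between the tags shows up as props.children inside Card
<Card>
  <h2>Title</h2>
  <p>Some body copy</p>
</Card>;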
Mapbox state doesn't sit well in useState, and setup needs to be passed into useEffect. It all results in mapbox not behaving as you might expect.
Used @mapbox/polyline. This then lets us plug it into the mapbox SDK easily.
MUI is a toolkit, not a straight jacket.
Refactoring a 3000 line code base into 200 is kinda therapeutic.
Resolve the red squiggles and save the day.
After learning the basics of yarn workspaces, I've gone down a rabbit hole of learning about microfrontends. This is an advanced frontend architecture pattern that lets you stitch together different parts of your app using different frameworks. On the surface this sounds like an awful idea; the runtime overhead must be pretty bad if you're using React, Svelte and Vue all in the same app. However, there's a business case when a very large app is trying to move away from a specific framework, or if you want different teams to own different sections of an app, e.g. checkout and product navigation in a large e-commerce site. For my own very basic use case, I want to make lots of different utilities using any FE framework I want, but also stitch it all together in a single SPA, then host it on netlify.
I think this is possible when you combine various technologies like yarn workspaces, webpack 5 federation and microfrontend frameworks like [single SPA](https://single-spa.js.org/docs/getting-started-overview).
Learning webpack basics, something that I've been meaning to do! Currently working through this tutorial.
With webpack-cli installed via npm, all your source files just need to be in a ./src directory, then running npx webpack will bundle everything and output to a ./dist directory.
More control comes from a webpack.config.js file. For example, a rule that runs .js files (excluding node_modules) through babel-loader, which will look at your .babelrc for how you want to transpile your js:
module.exports = {
module:{
rules: [
{
test: /\.js$/,
exclude: /node_modules/,
use: {
// without additional settings this will reference .babelrc
loader: 'babel-loader'
}
}
]
}
}
Useful webpack.config.js settings:
- mode: 'development' bundles the js in a development mode which allows for source maps for easier debugging.
- devtool: 'source-map' lets you see the original file which webpack bundled from and even set breakpoints.
- entry: './src/index.js' is a way to provide a specific entry point; it defaults to searching at ./src.
- path needs to be imported with const path = require('path').
- output: { filename: 'bundle.js', path: path.resolve(__dirname, 'public') }
- devServer: { contentBase: './dist' }, used along with webpack-dev-server, means you can run webpack serve to have a dev server with HMR support quite easily. webpack serve bundles and serves the assets in memory, so you will not see the changes written to disk.
There is no need for a 3000 line react component.
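Pulling the webpack settings above together into one sketch of a webpack.config.js (the paths are the ones mentioned in the list):
const path = require('path');

module.exports = {
  mode: 'development',
  devtool: 'source-map',
  entry: './src/index.js',
  output: {
    filename: 'bundle.js',
    path: path.resolve(__dirname, 'public'),
  },
  // contentBase is the webpack-dev-server option referenced above
  devServer: { contentBase: './dist' },
};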
jest.mock() vs jest.spyOn() Looks like here you are using jest.mock() and jest.spyOn() here on the same function. Usually, these are used interchangeably, but not together.
Another good trick to properly type mock
and mockResolvedValue
. Simply cast to jest.Mock
like so:
(axios.get as jest.Mock).mockResolvedValue(fakeResp);
Jest mocks and spies seem to make a bit more sense today.
Mock a whole module with jest.mock(). For example with axios we would do jest.mock('axios').
const mockedAxios = axios as jest.Mocked<typeof axios>
This provides us with a mocked instance of axios where we can override specific functions of axios like .get and .post with mocks. Jest's mockResolvedValue is a nice way of returning a fake resolved promise response.
const axiosSpy = jest.spyOn(mockedAxios, "get"); // first arg is the parent object, second is the property we want to watch
// not sure what you want to do if you want to just watch the parent object 🤔
expect(axiosSpy).toHaveBeenCalledTimes(2);
expect(axiosSpy).toHaveBeenCalledWith("./jsonEndpoint");
Revised mocking, spies and unit testing with Jest today. Revisiting fundamental testing concepts via some great blogs by Kent C Dodds:
jest.fn() lets you mock the implementation of a method that might be expensive or flakey to run. Usually you write a mock implementation to return some fake data like jest.fn((x) => x), a.k.a. jest.fn(<any arbitrary function>)
Kent C Dodds talks about 'monkey patching' using jest.fn(), a term which means to override the existing functionality, which is useful when we need to mock a library. We can simply override the implementation with our mock.
const originalGetWinner = utils.getWinner;
utils.getWinner = jest.fn((p1, p2) => p2);
When using plain JS, utils.getWinner gets new properties added in by jest such as .mock.instances, .mock.calls and .mock.results, which are used for a range of different assertions like what arguments were passed in, where the original instance came from and what the actual result was.
TS will complain that mock does not exist on the property; to get around this we can use a Jest spy instead, which is a way to watch when the function/method was called. It looks like this:
utils.getWinner = jest.fn((p1, p2) => p2);
const spy = jest.spyOn(utils, "getWinner");
const winner = thumbWar("alex", "jenny");
expect(spy).toHaveBeenCalledTimes(2);
expect(spy).toHaveBeenCalled();
spy.mock.calls.forEach((args) => {
expect(args).toEqual(["alex", "jenny"]);
});
it offers all the same functionality that is exposed on a .mock
property.
- Update the schema.graphql file
- npx prisma migrate to create schemas in the db
- npx prisma generate to generate the types which will be used for graphql resolvers and also the FE
- The nexusPrisma plugin is "aware" of the new schemas in prisma if set up correctly, which means defining the objectTypes is straightforward as t.model will be aware of what properties are available
- Build everything with the makeSchema function and queries with the queryType function
- nexus-plugin-prisma: my understanding is that it was maintained by nexus but is now deprecated, to be replaced by nexus-prisma, which does not have parity with nexus-plugin-prisma, most fundamentally t.crud, which automagically creates a crud api from your gql schema
More hands-on learning with prisma 2 today. Followed this tutorial to set up Next, TS, Prisma, GraphQL and have CRUD operations with Nexus.
Create an API route in the pages/api directory, then add the following boilerplate to a file called graphql.ts:
import { ApolloServer } from "apollo-server-micro";
import { schema } from "src/schema";
import { createContext } from "src/context";
const server = new ApolloServer({
schema,
context: createContext,
tracing: process.env.NODE_ENV === "development",
});
const handler = server.createHandler({ path: "/api/graphql" });
export const config = {
api: {
bodyParser: false,
},
};
export default handler;
The context is a factory function which returns an instance of Prisma:
//createContext.ts
import { PrismaClient } from "@prisma/client";
const prisma = new PrismaClient();
export interface context {
prisma: PrismaClient;
}
export function createContext() {
return {
prisma,
};
}
Learning the basics of Prisma 2. The slightly confusing thing about Prisma 2 is that it is many things; it's worth understanding that it is a totally different product to Prisma 1. Haven't drilled into the differences between the two products, but there is some useful information here.
What I understand about Prisma 2 so far:
- Models are defined in a schema.prisma file which uses graphql-like syntax, with datasource and model blocks. There are lots of additional attributes/directives/modifiers that can be used so that you have more control over the field types and creating relationships with other tables.
- prisma migrate will create real db schemas based on what was provided in the schema.prisma file. It will handle db migrations too.
- Queries are made with @prisma/client. Under the hood, the client knows to read the schema.prisma and generate a client that is tailored to your models.
Add attributes like test-id="InputForm"
to elements that need to be targeted by the Cypress API. Sometimes when component APIs are not using semantic HTML, it can make testing quite difficult. For example, some drop-down lists in material ui are a div with li elements in them. When there are no test tags that can be targeted, we may need to do something "hacky" and target ids or aria-ids instead, then search for [sibling](https://docs.cypress.io/api/commands/siblings#Syntax) elements.
Example cypress test:
context('Example', () => {
it('example cypress test', () => {
cy.get("[data-testid='Login-Button']").click();
cy.get("div[data-test-id='Login-Feild']")
.find('input')
.scrollIntoView()
.should('be.visible');
  });
});
Today has been about learning about working with forms in React with Formik in the deep end. Took me a while to grok the API but I think I finally get it now.
Form state is initialised via the initialValues prop:
<Formik
initialValues={{ name: 'jared', items: ['happy', 'sad', 'mad'] }}
onSubmit={(values, actions) => {
setTimeout(() => {
alert(JSON.stringify(values, null, 2));
actions.setSubmitting(false);
}, 1000);
}}
>
{// ...components }
</Formik>
Binding inputs to the initialValues object can be achieved with the <Field /> component:
<Formik initialValues={{ name: "jared", items: ["happy", "sad", "mad"] }}>
<Field as="select" name="color">
<option value="red">Red</option>
<option value="green">Green</option>
<option value="blue">Blue</option>
</Field>
</Formik>
Custom components can be wired up via the Field render prop, e.g. using the form.setFieldValue helper:
<Formik
  initialValues={{ name: 'jared', items: ['happy', 'sad', 'mad'] }}
>
  <Field
    name="color"
    render={({ form }) => (
      // set the field value via the form helpers passed to render
      <Button onClick={() => form.setFieldValue('color', 'pink')}>
        Pink
      </Button>
    )}
  />
</Formik>
Learning about RxJS observables.
An epic represents "pipes" of your async code. All async data flows through these pipes and our pipes can run actions based on this data; it works well with async processes in redux. RxJS also comes with a bunch of ready-to-go methods for managing data, such as map, reduce and debounce, so it can also be considered a utility lib like lodash but for async code.
An Observable is a bit like a Promise. A Promise can resolve or reject. An Observable subscription takes three callbacks:
// example oberservable
myObservable.subscribe(
(value) => console.log("next", value),
(err) => console.error("error", err),
() => console.info("complete!")
);
Second week at NB and getting my teeth into more TS/React. TypeScript still makes my eyes go crossed; I'm hoping things get easier as I familiarise myself with the codebase!!
Reading more of So Good They Can’t Ignore You. Summary of the possible control traps:
Some really awful solutions out there, but this is the best one I've found:
import { promises as fs } from "fs";

async function printFiles() {
  const files = await getFilePaths(); // assume this returns an array of file paths
  // for await works because fs.promises.readFile returns a promise per file
  for await (const contents of files.map((file) => fs.readFile(file, "utf8"))) {
    console.log(contents);
  }
}
Learnt a neat trick with setTimeout. In the context of the OSM service, it checks the time of the OSM changeset, gives it a small offset relative to the current time, then fires it into a setTimeout. The effect is like the data is being streamed in. The technique could also be used to throttle the amount of requests being made to an API.
Learning about some React component organisation patterns:
Read some more of “So good they cant ignore you”.
Cant sleep, reading a few pages of so good they can’t ignore you.
Started work on converting Musical OSM into TypeScript
Wrapped up the last project of Full stack React, TS, Node & GraphQL. Got everything I wanted from it in terms of learning how to set up GraphQL from scratch. It was a good bonus to also learn about Redux and TypeORM, and it has highlighted some gaps in my knowledge around data modelling.
Hacked about on something special. Massively inspired by Trams in Helsinki. I remember being in total awe seeing that visualisation for the first time. I always said I would create something similar once I got good enough at coding. I feel like a sense of coming full circle after creating this and just in time before starting my new job a full time JavaScript Developer.
Learning the useMutation hook. The first argument is a string of the GQL query. It also takes an options object, one of whose properties is refetchQueries; here another GQL query can be provided which will be executed after the mutation has completed. This is useful as some mutations may not return any data, so a follow-on query is often required.
The cookie is set to httpOnly so it can't be tampered with by JavaScript.
Learning how to use apollo client in a React code base.
Import ApolloClient, InMemoryCache and ApolloProvider from "@apollo/client" in our root react component, a.k.a index.tsx.
Create an ApolloClient instance like so:
const client = new ApolloClient({
uri: "http://localhost:9090/graphql",
credentials: "include",
cache: new InMemoryCache({
resultCaching: false,
}),
});
Ensure the backend has CORS enabled and is aware of what domain and port the react client is running under.
Finally we can wrap the main React <App/> in the <ApolloProvider/> and provide it a client prop, which is the client object created above.
Now we can make queries to the GQL server in any component like so:
import React, { useEffect, useState } from "react";
import { gql, useQuery } from "@apollo/client";
const myGQLQuery = gql`
query GetAllCategories {
getAllCategories {
id
name
}
}
`;
const MyComponent = () => {
const { loading, error, data } = useQuery(myGQLQuery);
return (
<div className="leftmenu">
{loading ? (
<span> loading </span>
) : error ? (
<span> Error </span>
) : (
<span> data </span>
)}
</div>
);
};
export default MyComponent;
Learning about the useLazyQuery hook in apollo client, which lets you execute graphql queries on demand. It's also useful in tandem with graphql variables: when calling the graphql query in javascript you can supply a parameter to the graphql argument by passing an object which has a variables object nested within it.
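A sketch of that on-demand pattern; the GetUser query and id value are placeholders:
import React from "react";
import { gql, useLazyQuery } from "@apollo/client";

const GET_USER = gql`
  query GetUser($id: ID!) {
    getUser(id: $id) {
      id
      username
    }
  }
`;

const UserLookup = () => {
  // the query only fires when getUser is called
  const [getUser, { loading, data }] = useLazyQuery(GET_USER);

  return (
    <button onClick={() => getUser({ variables: { id: "1" } })}>
      {loading ? "Loading…" : data?.getUser?.username ?? "Load user"}
    </button>
  );
};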
Database tables get their own files in a repo folder. Note these also act as the types used by TypeScript too!
Each table gets a <TableName>Repo; there can be any arbitrary logic in this file, and TypeORM provides pretty much every API required to query, filter and join data. The functions can be exported then used in GraphQL resolvers or even in just a regular REST API.
The typeDefs file acts as the contract for implementation in a resolvers file. The implementation of the typeDefs can now be carried out in the resolvers file. Rinse and repeat for every CRUD operation required for your app.
The bulk of the thinking goes into the typeDef, i.e. defining the shape of all the data schemas up front, the types of mutations and the types of queries which can be performed. Once all defined, it's writing the "real" logic in the resolvers.
Learnt about __resolveType. If a query or mutation resolves to a union type, it's important that the type is also implemented using __resolveType. Further reading
More backend work for the SuperForum project as part of Full-Stack-React-TypeScript-and-Node. Using decorators with TypeORM is quite a struggle; I'm not fully grasping the syntax entirely but getting things to work somehow. Database modelling is still a bit of a mind warp for me, definitely a weaker area in my skill set and something I should try to improve upon. Creating table relationships is still especially hard for me to get my head around for some reason. The API for querying your data in TypeORM feels much more comfortable. Having nice methods like findOne
and create
feels like coding with Firebase or Mongo again.
Learning what the architecture for authentication looks like if you were to implement it from scratch:
A User table can hold the record of the user and include fields such as Confirmed to indicate if the user account has been verified.
Crash course on using Redis for managing session state in an app and also TypeORM for managing db schemas in PostgreSQL. Suddenly a lot of new technologies to get my head around. A few things I'm pondering… how much of redis can be offloaded to the browser's localstorage instead for state? This textbook has a big emphasis on server-side state management, which they claim provides you more control over the user's environment. What kind of apps need that level of control?
First time dabbling with an ORM. I've always understood their purpose and can definitely see their benefits. Like with any abstraction, it's always good to learn the lower-level technology when possible. ORMs make me slightly uncomfortable as I'm giving up control of how the SQL is being written. I think these abstractions only make sense when you have multiple developers and there needs to be some sort of conformity…
Learning some interesting React.js design patterns:
This is a pattern whereby you create a component whose sole purpose is to render other components. There are some clear benefits of creating a factory component which I can think of:
- The factory component can contain the logic of how/when a certain component is to be rendered. For example, if the factory is provided an array of objects as a prop, it could conditionally render components based on specific properties of each object.
- It creates a separation of concerns between the logic for conditionally rendering a component and the generic component that is being rendered.
- When we abstract logic out into a separate component like this, it makes it easier to optimise the component by wrapping it all in a React.memo.
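A hypothetical sketch of the pattern; the component and prop names are made up:
import React from "react";

type Item = { id: string; kind: "image" | "text"; value: string };

const ImageCard = ({ value }: { value: string }) => <img src={value} alt="" />;
const TextCard = ({ value }: { value: string }) => <p>{value}</p>;

// the factory's only job is deciding which component to render per item
const CardFactory = React.memo(({ items }: { items: Item[] }) => (
  <>
    {items.map((item) =>
      item.kind === "image" ? (
        <ImageCard key={item.id} value={item.value} />
      ) : (
        <TextCard key={item.id} value={item.value} />
      )
    )}
  </>
));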
Passing dispatch down as a prop. This is a bit of a mind warp, but once you have set up a useReducer in a component it's possible to create child components that accept the dispatch function as a prop. This allows the child components to mutate the state of the parent component. The tricky thing is knowing what context of the dispatch is in play when using it in the child components.
Learning useEffect and starting to see how it's commonly used along with the useState hook in an API data-fetching manner.
useEffect is used everywhere and is clearly an important hook, but it's difficult to parse in my head; perhaps it just requires more practice. I naturally want to compare it to Vue, which has a much more user-friendly API. It's no wonder front-end frameworks are such a common subject for debate. They are both just tools at the end of the day. Debating about which tool is better made does not matter if you don't do anything meaningful with the tool.
Used the useReducer hook heavily for the Register and Login components. useReducer excels at letting you micromanage specific properties in an object. As react does not let you mutate things directly, we can only mutate state via useReducer using the dispatch function, which is made available when you deconstruct the useReducer call.
When we deconstruct useReducer we get access to an array which contains two things: the first item is the state object, the second item is the dispatch function used for mutating the state.
The useReducer function itself takes two arguments: a reducer function (a function with a switch-case to action something dependent on the action type being called), and the initial state object, basically what the first instance of your state should look like.
When using useReducer
it’s common to also deconstruct the function call in the return array and also then deconstruct the state object within that array like so:
const [{ propA, propB }, dispatch] = useReducer(aReducerFunction, {
propA: "",
propB: "",
});
This is probably why useReducer is kind of hard to understand at first glance. This is even before we have talked about the reducer function, or in this example the aReducerFunction.
The body of aReducerFunction
could look something like this:
export const aReducerFunction = (state: any, action: any) => {
switch (action.type) {
case "updatePropA":
return { ...state, propA: action.payload };
case "updatePropB":
return { ...state, propB: action.payload };
default:
return { ...state, resultMsg: "Not a valid action" };
}
};
In this example, if a valid action has been provided, i.e. "updatePropA" or "updatePropB", we spread the original state back into an object and ensure that the relevant property is updated. Spreading the whole original state into a new object and returning it is very much a "React" pattern which follows the functional programming paradigm of not mutating objects directly. This is our way of updating a reactive object in React.
Learnt about Error Boundaries, which use the componentDidCatch lifecycle event (not available as a react hook yet). When you wrap the main <App/> component in an error boundary you can catch any React-specific errors from all children components and render whatever we want. Note that Error Boundaries can't catch errors in server-rendered react, errors in event handlers or in asynchronous code; these are usually managed independently using try/catch.
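A minimal sketch of such an error boundary class (the fallback UI is just a placeholder):
import React from "react";

class ErrorBoundary extends React.Component<
  { children: React.ReactNode },
  { hasError: boolean }
> {
  state = { hasError: false };

  componentDidCatch(error: Error, info: React.ErrorInfo) {
    // log the error, then flip into the fallback UI
    console.error(error, info);
    this.setState({ hasError: true });
  }

  render() {
    return this.state.hasError ? <p>Something went wrong.</p> : this.props.children;
  }
}

// usage: <ErrorBoundary><App /></ErrorBoundary>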
Learning about graphql-middleware. This lets us write functions that get called before every resolver. Example boilerplate for what custom graphql middleware looks like:
export const log = async (
resolver: any,
parent: any,
args: any,
context: any,
info: any
) => {
if (!parent) {
console.log("Start Logging");
}
const result = await resolver(parent, args, context, info);
console.log("finished call to resolver");
return result;
};
Learning how to write unit tests for graphql query resolvers.
A testGraphQLQuery wrapper function takes the schema, source and variableValues:
import { graphql, GraphQLSchema } from "graphql";
import { Maybe } from "graphql/jsutils/Maybe";
interface Options {
schema: GraphQLSchema;
source: string;
variableValues?: Maybe<{ [key: string]: any }>;
}
export const testGraphQLQuery = async ({
schema,
source,
variableValues,
}: Options) => {
return graphql({
schema,
source,
variableValues,
});
};
With the makeExecutableSchema function we can pass in our real type definitions and resolvers as the schema parameter. The query string is the source parameter, and any query arguments go in the variableValues parameter. Example code:
import typeDefs from "./typeDefs";
import resolvers from "./resolvers";
import { makeExecutableSchema } from "graphql-tools";
import faker from "faker";
import { testGraphQLQuery } from "./testGraphQLquery";
import { addMockFunctionsToSchema } from "apollo-server-express";
describe("test getting a user", () => {
const GetUser = `
query GetUser($id: ID!){
getUser(id: $id){
id
username
email
}
}
`;
it("get the desired user", async () => {
const schema = makeExecutableSchema({ typeDefs, resolvers });
const userId = faker.random.alphaNumeric(20);
const username = faker.internet.userName();
const email = faker.internet.email();
const mocks = {
User: () => ({
id: userId,
username,
email,
}),
};
addMockFunctionsToSchema({ schema, mocks });
const queryResponse = await testGraphQLQuery({
schema,
source: GetUser,
variableValues: { id: faker.random.alphaNumeric(20) },
});
const result = queryResponse.data ? queryResponse.data.getUser : null;
expect(result).toEqual({
id: userId,
username,
email,
});
});
});
Learning how to write my own graphql mutation resolvers. Not too different from writing a query resolver.
Mutations are defined in a Mutation resolver object. Each resolver receives parent, args, ctx and info. When working with TS we can further annotate the args to see what is available; these are the arguments which will be provided by the consumer of the graphql mutation.
GraphQL subscriptions, as the name suggests, give you a real-time subscription to a graphql schema to be notified of any changes. Apollo also lets you write your subscriptions but requires quite a bit more boilerplate code. It requires the use of the createServer
function from the node http
module. The http server serves the express app. This instance of the http server is then provided to the instance of apollo server so it is aware of the subscriptions. Once all set up, writing our subscription logic follows the same pattern as GQL queries and mutations.
In the mutation we use the pubsub method, which can be destructured from the ctx object. This lets you "publish" or "notify" GQL whenever the mutation has occurred.
The subscription itself lives in a Subscription object, e.g.:
Subscription: {
newTodo: {
subscribe: (parent, args:null, { pubsub }: GqlContext) => pubsub.asyncIterator(NEW_TODO)
}
}
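On the mutation side, a hedged sketch of publishing to that same NEW_TODO trigger (the todo shape is made up):
Mutation: {
  addTodo: async (parent: any, args: { text: string }, { pubsub }: GqlContext) => {
    const todo = { id: String(Date.now()), text: args.text };
    // notify any active subscriptions listening on NEW_TODO
    pubsub.publish(NEW_TODO, { newTodo: todo });
    return todo;
  }
}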
You can import ApolloServer and makeExecutableSchema from apollo-server-express. This lets you use express as the middleware for the Apollo server, with the schema built via makeExecutableSchema.
Express middleware: one example is express.Router(), which lets you program actions depending on the URI. Another is bodyParser, which lets you tap into the req.body.message property to access the payload without needing to work with data streams. There are many more express middleware available!
Learnt about React Router for client side routing using the <Switch>
and <Route>
components.
The <Route> can take a path prop which will dictate what component to show depending on the url path. To show a component at a specific path use the component prop, which takes a React component as its input.
function App() {
const renderComponentWithProp = (props: any) => {
console.log("screenC props", props);
return <ScreenC {...props} message="This is some data" />;
};
return (
<div className="App">
<Switch>
<Route path="/c/:userid" component={renderComponentWithProp} />
</Switch>
</div>
);
}
There are a bunch of additional props available on your component when used with the <Route> component, such as history, location and match. These give you programmatic access to the react-router api. Alternatively you can tap into the react-router api using hooks such as useHistory and useParams.
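A sketch of the hooks version; ScreenC and the :userid param mirror the route above:
import React from "react";
import { useHistory, useParams } from "react-router-dom";

const ScreenC = ({ message }: { message?: string }) => {
  // reads the :userid segment from the matched route
  const { userid } = useParams<{ userid: string }>();
  const history = useHistory();

  return (
    <div>
      <p>{message} for user {userid}</p>
      <button onClick={() => history.push("/")}>Go home</button>
    </div>
  );
};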
Revising the fundamentals of creating an API with node.js using the http module. Common request headers:
- User-Agent - what browser & OS is sending the request
- Referrer - the URL the user was on before linking to the current URL
- Cookie - text which contains more info about the user and session specific to the current website. The server can add almost anything into a cookie. Usually has a session identifier and token.
- Content-Type - the type of data which is in the body of the request, e.g. application/json
- Access-Control-Allow-Origin - used with CORS to allow a different URL to make requests to the server. * means any URL is allowed
- Allow - indicates which HTTP verbs are supported
POST requests: the data is streamed in. Therefore we need to make use of the data and end event triggers to know when data has completed streaming in an async pattern, to then perform the required actions on the data. The pattern is for the stream to be captured in an array which is then put into a "memory buffer", then finally we can consume it as, for example, a JSON object. A sketch of this is below.
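A minimal sketch of collecting a streamed POST body with the node http module (port and response shape are arbitrary):
const http = require("http");

const server = http.createServer((req, res) => {
  const chunks = [];
  req.on("data", (chunk) => chunks.push(chunk)); // data arrives in pieces
  req.on("end", () => {
    // once streaming completes, join the buffer and parse it
    const body = JSON.parse(Buffer.concat(chunks).toString());
    res.setHeader("Content-Type", "application/json");
    res.end(JSON.stringify({ received: body }));
  });
});

server.listen(3000);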
. This because when create a new context this is also a new React component which does not use memo
therefore changes to this parent causes a knock on effect to all children. There is no fix for this! 😢. Good Blog which expands upon this a bit more - https://leewarrick.com/blog/the-problem-with-context/Learning basics of Redux. Setting up Reducers which remind me of actions in Vuex. The reducer is a way to filter down what kind of action should be done depending on the action type which can be provided. The action type is simply a string.
Import createStore and combineReducers from the redux package. Basic boilerplate for this looks like:
// ./src/store/configureStore.ts
import { createStore } from "redux";
import { rootReducer } from "./AppState";
const configureStore = () => {
return createStore(rootReducer, {});
};
export default configureStore;
// ./src/store/AppState.ts
import { combineReducers } from "redux";
import { UserReducer } from "./UserReducer";
export const rootReducer = combineReducers({
user: UserReducer,
});
export type AppState = ReturnType<typeof rootReducer>;
// ./src/index.tsx
import React from "react";
import ReactDOM from "react-dom";
import App from "./App";
import { Provider } from "react-redux";
import configureStore from "./store/configureStore";
ReactDOM.render(
<React.StrictMode>
<Provider store={configureStore()}>
<App />
</Provider>
</React.StrictMode>,
document.getElementById("root")
);
Use the useSelector hook to tap into redux state, e.g.:
const user = useSelector((state: AppState) => state.user);
Read some more of “So good they cant ignore you” . Passion vs Craftsman mentality. Craftsman mind set means you can focus on what you can offer to the world. Some argue that you need passion to learn enable to have the desire to hone your skills further, but often the case is that when you get good at something you start to be come passioniate about it.
Skimmed over notes on how to write integration tests in React with Jest and Testing Library. Testing Library allows you to easily target dom elements and assert whether text, elements or other components are appearing correctly. Jest provides capabilities to do snapshot testing and also mock functions and components to avoid running timely network calls. This is usually discouraged as we are diverging away from "real" behaviour, but it can sometimes be useful.
Starting to learn about why you may want to consider using Redux for larger-scale enterprise React apps. Redux may be useful in situations such as auth, where every component needs to be aware of this data. It's unreasonable for every component to keep its own state for this, which is why Redux can resolve this issue by being a single source of truth.
Saw how the useCallback react hook works in practice. The first argument is a callback function, the second argument is an array of props or state which the hook is "watching"; it keeps the current values of this data in memory. Only if a value has changed will the callback function be executed. Combine with React.memo to ensure that the component only re-renders if the props have changed. This provides a small performance benefit.
Hooks summary:
- useState replaces state and setState in class components. Used to update single values rather than objects. For complex objects useReducer might be better.
- useEffect is similar to componentDidMount and componentDidUpdate in class components. However, it runs before drawing on the screen happens. It takes a second parameter to watch a prop or state for changes. You can use this hook multiple times. Passing an empty array as the second parameter forces it to run only once.
- useCallback takes a function as its first argument; the second argument is an array of items that might change. This exists to save memory.
- useMemo is similar to useCallback. It can cache the results of a long-running task and will only re-run if the provided props or state it subscribed to have changed; these are passed in as a second array argument.
- useReducer is similar to react redux. It takes two parameters, a reducer and initial state. It returns a state object and a dispatcher. The reducer filters what to do based on the dispatched action.
- useContext allows for global state which can be injected into any child regardless of hierarchy. An alternative approach is to use React Redux.
- useRef is used to access the instance of an element and opt out of react's model. It does not trigger a re-render if the value changes.
Read some more Full-stack React, TS & Node. Learning about React lifecycle methods in class components.
Read some more Full-stack React, TS & Node. Learning about React lifecycle methods in class components.

When a component is mounting we have access to the following methods:
- `constructor`, the class constructor, used for initialising state.
- `getDerivedStateFromProps`, used for basing state on a prop from a parent component. Use carefully as it can cause re-renders.
- `render`, used to render out JSX.
- `componentDidMount`, happens after a component has initialised. A good place for API calls.
- `UNSAFE_componentWillMount`, as the name implies, a legacy method. Avoid!

When a component is updating, we have access to the following methods:
- `shouldComponentUpdate`, used to decide if a re-render should happen or not.
- `getSnapshotBeforeUpdate`, captures the state of the DOM before a render happens. Usually used alongside `componentDidUpdate`.
- `componentDidUpdate`, runs immediately after a re-render completes. You can make additional changes to the DOM here, or alter state. Important to have an exit condition so you don’t create an infinite loop.

When a component is unmounting, we have access to the following methods:
- `componentWillUnmount`, used for cleanup work like removing event listeners or subscriptions.

Tap into these lifecycles to help control re-renders (a sketch follows below). If re-renders get out of control, the UX will suffer.
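A sketch of how those lifecycle methods fit together in a class component; the `Clock` example and its polling behaviour are invented:

```tsx
import React from "react";

interface ClockProps {
  intervalMs: number;
}
interface ClockState {
  now: Date;
}

// Illustrative class component showing the mount/update/unmount lifecycle
class Clock extends React.Component<ClockProps, ClockState> {
  private timer?: ReturnType<typeof setInterval>;

  constructor(props: ClockProps) {
    super(props);
    this.state = { now: new Date() }; // initialise state in the constructor
  }

  componentDidMount() {
    // runs once after the first render: a good place for subscriptions or API calls
    this.timer = setInterval(() => this.setState({ now: new Date() }), this.props.intervalMs);
  }

  componentDidUpdate(prevProps: ClockProps) {
    // exit condition guards against an infinite update loop
    if (prevProps.intervalMs !== this.props.intervalMs) {
      clearInterval(this.timer);
      this.timer = setInterval(() => this.setState({ now: new Date() }), this.props.intervalMs);
    }
  }

  componentWillUnmount() {
    // cleanup work: remove timers, listeners, subscriptions
    clearInterval(this.timer);
  }

  render() {
    return <p>{this.state.now.toLocaleTimeString()}</p>;
  }
}

export default Clock;
```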
React team recommendations:
- `componentDidUpdate` is useful for triggering behaviour based on a prop change.
- `React.memo` can limit re-renders to when a prop has changed, instead of whenever the parent re-renders.
- `componentDidMount` for rendering state based on an API call. Note `componentDidMount` only ever runs once.
- `componentDidUpdate` is useful for managing state based on prop changes, but generally try to avoid derived state. Try to just use props directly and have state managed from a parent component.

Read some more of “So Good They Can’t Ignore You”. Following your passion is dangerous advice which can lead to job hopping, looking for the thing you’re interested in. We romanticise the idea of following your passion by looking at elite athletes and famous icons. In reality you must learn to love what you do. One way to do that is with a craftsmanship mindset. When we measure our success and can see we are improving at something, we start to enjoy it more.
Read some more Full-stack React, TS & Node. Revised React basics with create-react-app.
Revised `bind`, `call` and `apply` (a quick sketch follows below):
- `bind` is used to replace the instance of `this`; any function that relies on `this` will now refer to the new context.
- `call` is used at the time the function is being called, supplying the `this` context for that one call.
- `apply` is similar to `call`, but the second argument you provide is an array of arguments which will be supplied to the method being called.
- how `reduce` works
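A quick sketch of the difference between the three, using a made-up `greet` function:

```ts
// Hypothetical example to illustrate bind, call and apply
function greet(this: { name: string }, greeting: string, punctuation: string) {
  return `${greeting}, ${this.name}${punctuation}`;
}

const ade = { name: "Ade" };

// call: invoke now, passing `this` followed by the arguments one by one
greet.call(ade, "Hello", "!"); // "Hello, Ade!"

// apply: invoke now, passing `this` and the arguments as an array
greet.apply(ade, ["Hi", "?"]); // "Hi, Ade?"

// bind: don't invoke yet; return a new function permanently tied to `ade`
const greetAde = greet.bind(ade);
greetAde("Hey", "."); // "Hey, Ade."
```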
More Fullstack React, TS & Node
Abstract classes are used when you just want to specify the signature of a method in a class. This allows other classes which extend it to implement their own versions of the methods; this is also known as overriding. (A sketch covering abstract classes, interfaces and generics follows below.)
Interfaces are another way to write a contract for your code. They contain no implementation details.
Generics allow your type definitions to include an associated type chosen by the user. Usually used when the exact type is unknown but will be used in various places, like an argument.
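A compact sketch of all three ideas; the `Shape`, `BaseShape` and `Circle` names are invented for illustration:

```ts
// Interface: a contract with no implementation details
interface Shape {
  area(): number;
}

// Abstract class: specifies a method signature that subclasses must override
abstract class BaseShape implements Shape {
  abstract area(): number;

  describe(): string {
    return `This shape has an area of ${this.area()}`;
  }
}

class Circle extends BaseShape {
  constructor(private radius: number) {
    super();
  }

  // overriding the abstract method with a concrete implementation
  area(): number {
    return Math.PI * this.radius ** 2;
  }
}

// Generic: the exact element type is chosen by the caller
function firstItem<T>(items: T[]): T | undefined {
  return items[0];
}

const c = new Circle(2);
console.log(c.describe());
console.log(firstItem<number>([1, 2, 3]));
```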
Optional chaining is useful for handling unknown object properties, so that missing properties evaluate to undefined instead of throwing an error.
Nullish coalescing is a new shortcut for the ternary operator:

val1 ?? val2; // return val1 if it is not null or undefined, otherwise return val2

Nullish coalescing checks specifically for `null` or `undefined`; use `||` to check for truthiness. (Tiny example below.)
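A tiny sketch combining optional chaining with `??`; the `User` shape is made up:

```ts
interface User {
  profile?: { displayName?: string };
}

const user: User = {};

// Optional chaining: evaluates to undefined instead of throwing when profile is missing
const name = user.profile?.displayName;

// Nullish coalescing: only falls back when the value is null or undefined
console.log(name ?? "Anonymous"); // "Anonymous"

// Contrast with ||, which also treats "", 0 and false as missing
console.log("" ?? "fallback"); // ""
console.log("" || "fallback"); // "fallback"
```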
`private`, `readonly` and `static` keywords are used in classes (sketch below):
- `static` members can only be accessed via the class name, so referencing a static property through an instance is forbidden. `static` members are therefore useful for sharing state across multiple instances of the same class.
- The `super` keyword is the reference to the parent class you are inheriting from.
- The `protected` keyword means the property is still private to the class, but anything inheriting from the class has access to the member, unlike the `private` keyword.
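A short sketch showing those keywords in one place; the `Counter` classes are invented:

```ts
class Counter {
  // static: shared across every instance, accessed via the class name
  static totalCreated = 0;

  // readonly: can be set in the constructor but not reassigned afterwards
  readonly id: number;

  // private: only accessible inside Counter itself
  private count = 0;

  // protected: accessible inside Counter and anything that extends it
  protected label: string;

  constructor(label: string) {
    this.label = label;
    this.id = ++Counter.totalCreated; // must go via the class name, not an instance
  }

  increment(): number {
    return ++this.count;
  }
}

class NamedCounter extends Counter {
  constructor(label: string) {
    super(label); // super refers to the parent class we inherit from
  }

  describe(): string {
    return `${this.label}: protected, so visible in the subclass`;
  }
}
```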
“I’ll live the focused life, because it’s the best kind there is.”
Estimate how long you’d normally put aside for an obligation… give yourself a hard deadline that drastically reduces this time… There should only be one possible way to get the task done… working with great intensity.
Four disciplines of deep work:
Focus on the wildly important - focus on a small number of highly ambitious goals.
Act on the lead measures - these are behaviours that drive success on lag measures. Lag measures are retrospective results which come after applying a behaviour. A good example of a lead measure is time spent in a state of deep work dedicated to your wildly important goals.
Keep a compelling scoreboard - visualise your lead measures. Can be as simple as tracking how long you spend in a deep work state on a card or piece of paper.
Create a cadence of accountability - using your scoreboard, commit to a goal, e.g. 5 hours of deep work in a week.
“It’s not so much the intensity of deep work but the regularity”
Be Lazy
Little bit of progress on the supabase vue slack clone. Working on styling via tailwind, not getting very far
Read more Deep Work. Understanding the pitfalls of deep work
You have a finite amount of willpower that becomes depleted as you use it.
Feels like I’m reading Atomic Habits. Don’t rely on willpower; build up your habits to cultivate time for deep work.
A deep life is a good life, any way you look at it.
Section 17. PostgreSQL Recipes, which concludes the basics section at https://www.postgresqltutorial.com/. Good set of tips which I can refer back to, especially the removing duplicates section. The EXPLAIN operation is fascinating; amazing that postgres can break down the processes involved in a SQL operation, very useful for troubleshooting an inefficient script I suppose! Also came across INOUT on stack overflow. Now what’s interesting is that supabase can call stored procedures… kinda like serverless functions!?
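A hedged sketch of what calling a stored procedure through supabase-js looks like; the function name `get_channel_message_count`, its argument and the env var names are all assumptions, not from the slack clone starter:

```ts
import { createClient } from "@supabase/supabase-js";

// Assumes these env vars exist in the project; the names are illustrative
const supabase = createClient(
  process.env.SUPABASE_URL!,
  process.env.SUPABASE_ANON_KEY!
);

async function messageCount(channelId: number) {
  // rpc() calls a Postgres function (stored procedure) by name, a bit like a tiny serverless function
  const { data, error } = await supabase.rpc("get_channel_message_count", {
    channel_id: channelId,
  });

  if (error) throw error;
  return data;
}
```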
Section 15. Conditional Expressions & Operators: learnt about COALESCE to provide a default when a value is null; good thing learning TS taught me about nullish coalescing, which made this make sense. Also revised CASE for conditional statements and CAST using ::. I think I now understand all the syntax that is being used in the supabase slack clone starter SQL.
Section 16. PostgreSQL Utilities; I have a better idea of psql commands now. Think I will stick to pgadmin though!
Section 14. PostgreSQL Data Types in Depth on https://www.postgresqltutorial.com. Great module on the Array type and also Hstore and JSON data management, all in SQL. Got me excited about how useful these data types will be for web apps like Jottivity. Also learnt about custom domains and types, which are useful for executing common queries as database functions. Mind blown. I feel silly that after many years of working with RDBMS at a very shallow level I am only now fully realising the full capabilities of something like postgres!
Section 13. Understanding PostgreSQL Constraints on https://www.postgresqltutorial.com. Great to finally have a deeper understanding of setting constraints on tables for database-level data checking. Also learnt a lot more about foreign keys, which I thought I understood. Didn’t realise you can configure child table records to automatically clean up when the related parent record is removed, via the CASCADE option (so useful!).
Section 14. PostgreSQL Data Types in Depth. I thought it might be too basic, but I’m still learning more of the intricacies of data types like Date and Timezone. Really cool to see so many inbuilt functions to manipulate dates and times to extrapolate and skew data in all sorts of different ways.
Section 12. Managing Tables. Lots to take in, too much to remember, but good to have a reference of all the types of operations available in postgres.
“High-Quality work produced = (Time) * (Intensity of Focus)”. Get comfortable working in extended periods of deep work or get left behind.
Section 9. Modifying Data on https://www.postgresqltutorial.com/. All good stuff to revise again.
Section 7. Subquery on https://www.postgresqltutorial.com/. Slightly confusing that subqueries are similar to joins in many ways. Need to delve into this deeper some day and understand the intricacies of both. For now a regular join will probably suffice for most operations.
Section 5. Set Operations: unions, intersects and except clauses.
Section 6. Grouping sets, Cube, and Rollup: this youtube video explained it well and I finally grokked it, though I just don’t understand when you would want grouped data like that. CUBE and ROLLUP functions are interesting shortcuts for producing variations of grouped data from your table, but similarly, I just don’t know why you would want to work with data like that. Seems to make more sense for it to be separate queries to me.
Section 4. Grouping Data on https://www.postgresqltutorial.com/. Learnt about the GROUP BY clause for grouping attributes together. Also first time learning about the HAVING clause. It is very similar to WHERE, which filters specific records; HAVING is used in conjunction with GROUP BY to filter groups of records.
Completed Section 3. Joining Multiple Tables. Great to just cover the gaps in my SQL knowledge. Love this visualisation of different joins:
Cross joins and natural joins were new to me but were really well explained at https://www.postgresqltutorial.com/.
Finished reading Essentialism. Closing quote was good.
Ask yourself, “what is essential?” Eliminate everything else.
Book was ok. I apply a lot of it in practice already and I often try not to overwhelm myself with too much going on. The whole book felt like snippets from Atomic Habits, The One Thing and Do More with Less.
Section 2. Filtering Data on https://www.postgresqltutorial.com/. All stuff I have used in practice, but good to understand how it works in a bit more detail.
Section 1. Querying Data.
Worked through https://www.robincussol.com/docker-for-js-devs-how-to-containerise-nodejs-apps-efficiently/. Nice tutorial on how to productionise a Next.js app (should apply for vue apps) efficiently by doing the following:
Worked through Tutorial: Node.js live debugging
Read a few more pages of Essentialism. Focus on little wins and a system that encourages progress. Pretty much what I am working on with this micro blog. Every small completed activity is a win and should be logged.
`import()` and how it returns a promise, so it can be chained. Also understand lazy loading and how it can work along with Webpack to chunk bundles up.
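A tiny sketch of chaining a dynamic `import()`; the `./chart` module and its `renderChart` export are hypothetical, and a bundler like Webpack would split that module into its own lazily loaded chunk:

```ts
// Dynamic import() returns a promise, so it can be chained like any other promise.
// "./chart" is a made-up local module with a renderChart export.
function showChart(el: HTMLElement) {
  import("./chart")
    .then((mod) => mod.renderChart(el)) // Webpack emits "./chart" as a separate chunk, loaded on demand
    .catch((err) => console.error("failed to load chart module", err));
}
```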
First time as a coach at codebar yesterday. A bit out of my comfort zone but it was a good experience. It was harder than I anticipated; I jumped in the deep end with the mentee too quickly and probably confused the heck out of them.
Read a few pages up to page 65 of The One Thing. Finished the chapter about distractions and multitasking and how we should avoid it at all costs.
Read up to chapter 32.2 of Coding Career. Really interesting part about negotiating your salary and not disclosing your salary when asked by recruiters. I’ve been doing it all wrong!
Worked on Whatch; netlify functions seem like overkill but it’s good server-side/serverless coding practice. Feeling really happy coding in TypeScript right now. Need to figure out how to level up and understand some deeper concepts of it. Other than that I’m feeling really productive. Made good progress with the movie recommendation API endpoint.
`useCalculator` and call it a day? Need to read some Vue 3 source code to get a better feel for this pattern.