Imran's personal blog

February 20, 2024

Godot 4 Pinned Notes

Filed under: Uncategorized — ipeerbhai @ 4:19 pm

These notes either cover Godot 4’s language changes or correct knowledge I previously had wrong.

Godot 4.2 language changes

  • Scene unique variables.
    Variables can be made scene-unique so they refer to Godot controls. This is done with the % sign: if I make any control in the editor and set its unique name, I can then access that node by prefixing % to the name in GDScript. Example:
    – set it: give a VBoxContainer the unique name fubar in the editor.
    – use it: %fubar.add_child(checkbox)
  • .connect API change.
    The node.connect API has changed. Previously, the 3 argument API was .connect(signal_name, object, handler_name) where both signal_name and handler_name were strings representing the signal to subscribe to and the handler, respectively. Now, the API has 2 arguments, the signal_name, and the function in the object instance to call. Example:
    old: http_request.connect("request_completed", self, "_on_request_completed")
    new: http_request.connect("request_completed", self._on_request_completed)
  • parse_json is deprecated. Use the JSON class instead. Example:
    var my_json = JSON.new()
    var foo = my_json.parse_string(json_string)
  • Reference base class renamed to RefCounted
  • rect_min_size control property is deprecated; use custom_minimum_size instead.

GDScript Error Corrections:

  • Signals expose a connect function with parameter binding to the receiver. Here’s an example with checkbox:
    # Create a checkbox, connect the pressed signal to a receiver, and send a parameter to the receiver instance
    var receiver_object = Receiver.new()
    var send_parameter = 1
    var checkbox = CheckBox.new()
    checkbox.pressed.connect(receiver_object.function.bind(send_parameter))
  • Callbacks must be of type Callable, and called via .call.
    Other call syntax is incorrect. If I want to bind a single function callback, I have to do it like this:
    func use_callback(callback: Callable):
        var my_param = 1
        var output = callback.call(my_param)


November 4, 2023

IML Animation Example

Filed under: Uncategorized — ipeerbhai @ 9:26 pm

Here is an example of how to use the classes and namespaces to create a scene, set lights, insert a humanoid character, generate a 2d background, and animate the human character.
###

# AnimateHumanoid.py is a python script that animates a humanoid character.
import os
import sys
import platform

from IML.WorldUtilities import WorldUtilities
from IML.CharacterUtilities import Humanoid
from IML.TimeKeys import TimeKeys
from IML.CameraUtilities import CameraUtilities
from IML.LightUtilities import *
from IML.MeshPrimitives import MeshPrimitives
from IML.TextureUtilities import TextureUtilities
from IML.StableDiffusion import StableDiffusion

# instance all utility classes
p = MeshPrimitives()
w = WorldUtilities()
t = TextureUtilities()
tk = TimeKeys()
camUtils = CameraUtilities()
lightUtils = LightUtilities()
sd = StableDiffusion()
satoshi = Humanoid()

## main area
rootPath = "./"  # output root for renders (assumed; not defined in the original script)
w.ClearScene()
targetColor = "#ff0000" # red

## Prepare lights
charLight = lightUtils.AddLight(LightType.AREA, "Character Main Light") # character light
lightUtils.ColorLightByHexCode(charLight, targetColor)

## move the character light so that the character is lit with a red light from above and in front
w.MoveTo(charLight, (0, -1, 2))
w.RotateTo(charLight, (45, 0, 0))

## background painting lights
paintingLight1 = lightUtils.AddLight(LightType.AREA, "Painting Light 1")
paintingLight2 = lightUtils.AddLight(LightType.AREA, "Painting Light 2")
paintingLight3 = lightUtils.AddLight(LightType.AREA, "Painting Light 3")

## rotate and move the painting lights so they are in front of the background painting that is yet to be generated
w.RotateTo(paintingLight1, (45, 0, 0))
w.RotateTo(paintingLight2, (45, 0, 0))
w.RotateTo(paintingLight3, (45, 0, 0))
w.MoveTo(paintingLight1, (-1, 0.5, 2.5))
w.MoveTo(paintingLight2, (0, 0.5, 2.5))
w.MoveTo(paintingLight3, (1, 0.5, 2.5))

## add a camera, position/rotate it, and set it as the render camera
camera = camUtils.AddCamera()
camUtils.SetRenderCamera(camera)
w.MoveTo(camera, (0, -5, 0.8))
w.RotateTo(camera, (90, 0, 0))

## add some colored planes to the world for placing the character and having a background
wallSize = 2.5
floorPlane = p.Plane(10)
wallPlane = p.Plane(wallSize)
w.MoveTo(wallPlane, (0, wallSize/2, wallSize/2))
w.RotateTo(wallPlane, (-90, 0, 0))

## create a material and shader for the floor plane and add a default image texture to the material
floorMaterial, floorShader = t.CreatePrincipledMaterialForWorldObject(floorPlane)
t.AddMaterialToObject(floorPlane, floorMaterial)
rgbaFloorImageAsNode = t.AddBlankImageTextureToShader(floorMaterial, floorShader, targetColor)

## create a material and shader for the wall plane
wallMaterial, wallShader = t.CreatePrincipledMaterialForWorldObject(wallPlane)
t.AddMaterialToObject(wallPlane, wallMaterial)
rgbaWallImageAsNode = t.AddBlankImageTextureToShader(wallMaterial, wallShader, targetColor)

## generate a picture from stable diffusion and use it as the wall image.
wallImage = sd.CreatePicture("A fairy forest at golden hour, 3d rendering, digital art")
rgbaWallImageAsNode.image.pixels = t.ConvertPILImageToRGBAArray(wallImage)

## Load the “Satoshi” character, grab his rig, then enter pose mode.
(satoshi_collection, status) = satoshi.LoadFromStandardLibrary("Satoshi")
satoshiRig = satoshi.GetSavedRig()
w.SetPoseMode(satoshiRig)

## pose the satoshi character at different times to do a little dance
satoshi.RecordPoseByName("TorsoStraight", tk.GetFrameForTime(0)) ## straight ahead, arms down.
satoshi.RecordFrameByPoseDict({"torso.RotateLeftRight": -0.2, "torso.MoveLeftRight": 0.2}, tk.GetFrameForTime(0.5)) ## rotate torso right and bend left
satoshi.RecordFrameByPoseDict({"torso.RotateLeftRight": -0.4}, tk.GetFrameForTime(1.0)) ## rotate torso further
satoshi.RecordFrameByPoseDict({"torso.RotateLeftRight": 0.2, "torso.MoveLeftRight": -0.2}, tk.GetFrameForTime(1.5)) ## now rotate to the left, bend right
satoshi.RecordFrameByPoseDict({"torso.RotateLeftRight": 0.0, "torso.MoveLeftRight": 0.0}, tk.GetFrameForTime(2.0)) ## rotate torso and bend it back to neutral

## set the timeline to the beginning
tk.MoveToTime(0)

## get the control bone and controls for eyes, torso, etc
eyeControl = satoshi.head.eyes.GetControlBone()
torsoControl = satoshi.torso.GetControl()

## Go to time index 0, set a location keyframe from the eye control bone
tk.MoveToTime(0)
tk.SavePoseByControls([torsoControl, eyeControl], tk.GetTime())

# ## go forward 1 second, move the eye left
tk.MoveToTime(1)
eyeControl.MoveLeftRight(-0.1)
tk.SavePoseByControls([eyeControl], tk.GetTime())

tk.SavePoseBoneLocation(eyeControl.bone, tk.GetTime())

# ## go forward 1 second, move the eye right
tk.MoveToTime(2)
eyeControl.MoveLeftRight(0.1)
tk.SavePoseByControls([eyeControl], tk.GetTime())

# ## go forward 0.1 seconds, rotate the torso left
tk.MoveToTime(2.1)
torsoControl.RotateLeftRight(0.01)
tk.SavePoseByControls([torsoControl], tk.GetTime())

# # go forward a second, move the torso further
tk.MoveToTime(3)
torsoControl.RotateLeftRight(1)
tk.SavePoseByControls([torsoControl], tk.GetTime())

# ## go forward another second, rotate the torso back the other way
tk.MoveToTime(4)
torsoControl.RotateLeftRight(-1)
tk.SavePoseByControls([torsoControl], tk.GetTime())

tk.SaveCharacterPose(satoshiRig, tk.GetTime())

tk.SavePoseByControls([torsoControl], tk.GetTime())

# Render the first frame to disk
tk.MoveToTime(0)
w.RenderFrameToDisk(rootPath + "temp/AnimateHumanoid.jpg")

# Now render the entire animation to disk
end = tk.GetFrameForTime(2.1)
w.RenderAnimationToDisk(rootPath + "temp/AnimateHumanoidAnimation.mp4", end=end)
w.RenderFrameToDisk(rootPath + "temp/AnimateHumanoid.jpg")

IML Library API

Filed under: Uncategorized — ipeerbhai @ 9:24 pm

Library Description

IML exposes many namespaces and classes. Here is a rundown of each and its purpose.

CameraUtilities — this class allows for the addition, motion, settings management, and render priority of cameras in the scene. The following functions are available in this class (a short usage sketch follows the list):
– AddCamera(name="New Camera"). This adds a camera to the world named "New Camera" and returns a camera object.
– MoveTo(camera, location=(0,0,0)). This moves camera objects to a specific location in world space.
– RotateTo(camera, rotation=(0,0,0)). This rotates the camera in world space in degrees around X, then Y, then Z.
– SetRenderCamera(camera). This marks the specific camera as the active camera for multi-camera actions.
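
Here is a minimal sketch of how these camera calls fit together, based only on the signatures listed above. The camera name and the numbers are just placeholders, and the import path is assumed to mirror the animation example.

# camera_sketch.py -- illustrative only
from IML.CameraUtilities import CameraUtilities

camUtils = CameraUtilities()

camera = camUtils.AddCamera(name="Hero Camera")   # add a camera object to the world
camUtils.MoveTo(camera, location=(0, -5, 1.2))    # world-space position
camUtils.RotateTo(camera, rotation=(90, 0, 0))    # degrees around X, then Y, then Z
camUtils.SetRenderCamera(camera)                  # mark it as the active render camera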

CharacterUtilities — this is a namespace with different classes that together create the concept of a character. A character can be any mesh that uses bones to change pose positions. Vectors represent the range of motion of a bone, or set of bones. There are two kinds of Characters — human and non-human. Human characters have standard bones for movable parts of a human, such as eyes, arms, legs, lips, or the torso. Non-human characters do not have standard representations, and require a full description of the character’s rig. The following classes and functions are available (a short usage sketch follows this list):
– StandardVector. This class defines control motions for humanoid characters. It is not complete, so currently only the following controls are supported:
— torso. This control allows a humanoid to rotate around the hips or bend left and right.
— arms. This control collects the left and right arm, and allows inverse kinematic motions of the arm in a hemisphere around the shoulder.
— head. This controls motion of items on the head of a humanoid, including eyes, ears, and mouth.
– Character. This class defines non-human characters, and allows the manipulation of non-human controls.
— GetSavedRig(). This returns the rig object as a path from the character instance.
— GetAndSavePathOfRigFromCollection(collection). This searches a collection for an armature, and returns the world reference to that armature.
— Import(characterFile, collection). This takes a rigged character file and imports the character into a collection.
– ControlBone. This class defines a single bone in the rig as a control. Similar to Blender’s concept of a pose bone. All control bones have named degrees of freedom and a motion amount that is between -1.0 and 1.0. These named degrees of freedom are created by callback functions of the character. This is hard to explain abstractly, so an example is warranted. If there exists a human character with an upper arm bone, this bone has 2 degrees of freedom — up and down being one, with left and right being the other. A control bone can be created, and a named degree of freedom called “MoveUpDown” can be created, then a callback function to move the bone appropriate to the character can be created. This way, the character can move the control bone in a standard manner.
— Clamp(value, min, max). This is a math helper — if the value is outside the bounds, it returns the nearest value inside the min..max bound.
— AddCallBack(callBackName, callBackFunc). Control bones maintain a dictionary of named functions to call to perform a move. The most common names can either be “MoveLeftRight” or “MoveUpDown”. The callback will be called with the input value given to the move command. This allows motions to link together.
— AddChildOfConstraint(targetBoneName). Any control bone can be a child of any other bone. This allows rows of bones to move together if a parent bone has moved. A control bone can be a child of exactly one other bone.
— RemoveChildOfConstraint(). Remove the child constraint for the bone.
— _Move(motionChoice, amount). This internal helper function actually moves the bone. motionChoice is the name of the motion, and amount is a number between -1.0 and 1.0.
— MoveLeftRight(amount). This is a shortcut for _Move(“MoveLeftRight”, amount).
— MoveUpDown(amount). This is a shortcut for _Move(“MoveUpDown”, amount).
— GetWorldPosition(). This returns the world coordinates of the bone. Bones have local positions in an armature, and this maps to world space.
— GetWorldRotation(). Bones have relative rotation to a parent. This returns world rotation as euler rotations.
— SetWorldPosition(position). This moves the bone to a position in world space.
— _GetBoneByName(name). Finds the named bone in an armature and returns the raw bone. In Blender, this returns a pose bone.
— GetControl(). This converts the controlbone to a control type that contains one bone.
– Control. Control is a class to animate arrays of both controls and control bones. It is self-referential — that is, a control can contain controls. All controls must expose a MoveLeftRight and a MoveUpDown callback. All controls must eventually bottom out in control bones. A control will hold an array of controls and an array of control bones. The idea being that eventually, controls contain only control bones. The bones are then called to move by an amount, with the amount scaled in each call. The scaling factor is decided when a control is created.
— AddControl(controls, controlOrder). Adds a control to move in a specific order in the list of controls. It will replace controls given the same order slot.
— AddControlBone(controlBone, leftRightScale, upDownScale, forwardBackwardScale, leftRightRotateScale, upDownRotateScale, forwardBackwardRotateScale). Adds a control bone and sets up scale factors for the amount. When a Control is called with a motion request, the motion functions are given an amount. This then calls the control bones in the order specified, with the scale factors specified. This has the ability to dampen or amplify a motion in the call chain.
— There are various Move functions that all have similar signatures and do similar actions, as specified by the function names. Each takes an amount of motion to pass to all children controls. These are MoveLeftRight(amount), MoveUpDown(amount), and RotateLeftRight(amount). These iterate the controls, then the control bones, calling into each object’s _Move function with the named motion and the scaled amount.
– Humanoid is a class that derives from Character, and defines a hierarchy of classes in Python. The hierarchy is: Humanoid -> Head | Torso | Arms | Legs -> Eyes | Ears | Mouth. So, it is possible to perform an action like human.Head.Eyes.MoveUpDown(1.0). For items that are paired in humans, such as eyes or arms, a left/right is available. So, human.Head.Eyes.left.MoveUpDown(1.0) would move only the left eye to look up. Items with a left/right have a GetChirality() function that returns whether this is the left or right item of the type. Special functions exist for moving elbows, as these are forward kinematic controls, while hands are inverse. To move a hand, you specify the motion in world space, and the IK solver will move the other bones. To move an elbow, a special ElbowMoveUpDown(amount) exists that performs forward kinematics. Special functions on the humanoid class can be used to get the length of the arm as well as the displacement of a hand in the current pose. This allows determining how much of the range of motion is left prior to executing a move. Notable functions include:
— LoadFromStandardLibrary(characterName). This loads a fully rigged character from a pre-supplied library of characters. It returns a type of Humanoid.
— GetAllUsedControls(). This returns all controls that have been modified since the character loaded, or None if the character has never changed poses.
— GetPoseByName(self, poseName). This creates a standard vector for the hierarchy of controls that defines a human, all set to specific values. Available pose names are “Neutral”, which is arms down looking straight ahead with feet together, “T-Pose” which is arms spread wide, “TorsoStraight”, which is a relaxed version of Neutral with slight outward swing of the arms.
— RecordPoseByName(poseName, frame). This uses a named pose to record that pose to a specific frame in the animation keyframes.
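
Here is a minimal sketch of the ControlBone/Control pattern described above, using only the names from this list. The constructors, the bone name, the import of ControlBone and Control from the same module as Humanoid, and the callback body are all assumptions for illustration.

# control_sketch.py -- illustrative only
from IML.CharacterUtilities import ControlBone, Control

# Wrap a raw bone in a ControlBone and give it a named degree of freedom.
upper_arm = ControlBone()                              # constructor assumed
raw_bone = upper_arm._GetBoneByName("upperArm.L")      # hypothetical bone name

def move_upper_arm_left_right(amount):
    # amount arrives clamped to -1.0..1.0; the mapping to the rig is made up here
    raw_bone.rotation_euler[2] = amount * 1.2

upper_arm.AddCallBack("MoveLeftRight", move_upper_arm_left_right)

# Group control bones into a Control; the scale factors dampen or amplify motion.
arm_control = Control()                                # constructor assumed
arm_control.AddControlBone(upper_arm, 0.5, 1.0, 0.0, 0.0, 0.0, 0.0)

# Moving the control fans out to each child control bone with its scale applied,
# i.e. roughly upper_arm._Move("MoveLeftRight", 0.8 * 0.5) under the hood.
arm_control.MoveLeftRight(0.8)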

CollectionHelpers — this class helps manage the scene/world by putting objects into groups called collections. You can create a collection with a name, move objects between collections, or get a list of all collections.
– CreateCollectionWithName(name). Create a named collection to contain objects.
– MoveObjectsBetweenCollections(objects, oldCollection, newCollection). Move objects between the old and new collections.
– GetMasterCollection(). All scenes contain a root collection known as the master collection. All objects initially load to this collection, and then are moved to new named collections.

Importers — this class helps manage importing and exporting assets from standard formats such as Blender, FBX, or glTF. USD is not currently supported because USD is not available in the version of Rhino used for development/testing, but hopefully will come at a later date.
– ImportGLTF(filePath, fileName). This loads the GLTF files into the master collection.
– ImportOrLinkFromBlendFile(blendFile, objectType, objectName, action). If the action is “Import”, the named object in the blendfile is appended, else it is linked. Use blender’s types ( scene, mesh, etc) and the specific names of the objects being imported.
– ImportHDRorEXRIntoWorld(exrFile, position). Load the exr or HDR file into the world, setting the center of volume to the specified position.

LightUtilities — this class helps manage light. You can add lights, set the color to an HTML hex code, change the shape of the light, or change the intensity of a light.
– AddLight(lightType, name). Lights can be Blender’s standard types of POINT, SUN, SPOT, AREA, HEMI, and Volume.
– ColorLightByHexCode(light, hexCode). Set the color of the light to the specified CSS hex code.

MeshPrimitives — this class creates basic mesh primitives. It defines both 2d items like circles and 3d items like spheres. It has the ability to construct circles, cubes, cylinders, empties (aka objects that don’t have real geometry), spheres (both Ico and UV), and planes.
– Circle(radius, location). A circle with specified radius and location on the XY plane.
– Cube(size=[]). A cuboid with specified length, width, and height.
– Cylinder(r, h). A cylinder with radius r and height h.
– Empty(location=()). A blender empty object with location (x, y, z).
– IcoSphere(radius, location). A sphere with radius and location specified. This mesh has vertices equally spaced throughout the sphere.
– Plane(size). A square plane on the XY plane with the specified dimension.
– UVSphere(radius, location). A UV sphere with the radius and location specified. The sphere has layered polar rectangles.

MeshUtilities — this class manages boolean operations, mesh density operations, and distortion operations for meshes (a short sketch combining it with MeshPrimitives follows this list).
– IncreaseVertexDensity(mesh, level, render_level). Increase the density of vertices by levels.
– MeshBoolean(main_object, other_object, operator). Perform a boolean operation between two meshes, putting the result back into the main object and deleting the other object. Valid operators are “DIFFERENCE”, “UNION”, and “INTERSECT”.
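
Here is a minimal sketch of a boolean cut using the two classes, based on the signatures above. The module path for MeshUtilities is assumed to follow the same IML.<ClassName> pattern as the other imports, and the sizes are arbitrary.

# csg_sketch.py -- illustrative only
from IML.MeshPrimitives import MeshPrimitives
from IML.MeshUtilities import MeshUtilities

p = MeshPrimitives()
m = MeshUtilities()

# Build a block with a cylindrical hole via a boolean difference.
block = p.Cube(size=[2, 2, 1])
hole = p.Cylinder(0.5, 2)                 # radius 0.5, height 2

m.MeshBoolean(block, hole, "DIFFERENCE")  # result lands in block; hole is deleted

# Add vertices beyond the minimum, e.g. before doing any vertex distortion.
m.IncreaseVertexDensity(block, 2, 3)      # level, render_level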

RigUtilities — this class manages rigs apart from movements. It allows the creation of bones, rigs, controls, and snaps.

SkeletonUtilities — this class manages collections of bones that are linked together, head to tail. It allows adding more bones, segmenting bones, selecting a single bone in a list, or deleting entire groups of bones.

StableDiffusion — this class wraps up stable diffusion to use for texturing objects on the fly.

TextureUtilities — this class manages shaders and textures. It allows unwrapping meshes and assigning UVs, creating and managing shaders, and creating and managing textures.

TimeKeys — this class manages time and keyframes. It manages the frame rate, the total length of animation time, adding and subtracting keyframes, and saving specific bones to the keyframe.

WorldUtilities — this is by far the largest class. It manages which objects are active or selected, adds or removes objects from the scene, enumerates objects in a scene, gets world-space locations of objects, gets the parent/children of objects, gets OS filesystem locations for standard assets, manages HDRs, moves objects around, renders images and videos, names objects, changes modes for applications like Blender, changes visibility, moves the skybox, and does basic translate, scale, and rotate operations on objects.

IML

Filed under: Uncategorized — ipeerbhai @ 8:00 am

Background

IML stands for Imran’s mesh library. It’s a python library designed to work with LLMs to generate scenes and geometry. IML itself has implementations for Blender and Rhino, but each implementation is at a different state of readiness. It doesn’t depend on a specific engine — rather, it is its own abstraction language for mesh/modelling concepts. IML is based on a few different concepts:
– Constructive Solid Geometry. All shapes are made of very few solid primitives. These solid primitives can be added, subtracted, or intersected to create complex shapes. All meshes are quad faced by nature.
– Any object can be translated, scaled, or rotated in world space. This includes meshes, lights, planes, or any other object.
– Mesh Density. Shapes can have the density of vertices changed beyond the minimum needed to define the geometry. We can increase or decrease vertex density on the fly.
– Vertex distortion. All meshes are comprised of vertices, and those vertices can be distorted in various ways. We can translate the vertex along its normal, or free move it in space. We can select vertices by distance to points, so we don’t need to know ahead of time where a vertex is located.
– A standard asset library with a series of helper classes to load assets and prerigged characters.
– Various utility functions to help work with importing files, manage textures, rigs, lights, or time, or the scene itself.

Concept of Operations

IML works in “world space”, and creates a standard Cartesian cubic world with XY being the ground plane, and Z being the Up/Down direction. While it can be used for modelling objects from primitives, its real strength comes from managing assets in a standard way. It also tries to make 3d space more accessible to developers without a 3d background. For example, lights can be colored with HTML codes, as can simple textures. It uses standard engines underneath — supporting both Rhino and Blender at the moment. This allows developers to choose the underlying tool that best fits the job. If you want very accurate models for 3d printing, use Rhino as the underlying tool ( and lose some functionality ) to make very accurate objects. If you want to make fast animations, use Blender as the underlying tool.

November 1, 2023

The Age of Intelligent Applications

Filed under: Uncategorized — ipeerbhai @ 2:47 pm

I’ve been working with LLMs for a long while now. I’m one of those weird people who likes to play with every new technology as it emerges. I’ve played with BlockChain / Cryptocurrency, I’ve played with old school ML, I’ve played with VR, I’ve played with 3d printers, etc… Now-a-days, I’m playing with LLMs.

As I play with LLMs, I’ve done the basic stuff — chat with ChatGPT, tried out Claude, downloaded and ran models from HuggingFace, used vector databases, played with agent-builder frameworks like Semantic Kernel, and of course used copilots and copilot chats. I view us as at the cusp of entering a brand new age of software. I call this new wave of applications, “Intelligent Applications”. I think they’ll be as disruptive to traditional applications as “Distributed Applications” were to monoliths. First, let’s sketch a rough timeline.

The first applications, apps if you will, were hard wired. You literally programmed by moving wires in a connection board, kind of like old telephone switch-boards. Programming was a physical task, and you stood in front of the machine. I’m dropping the mechanical and fixed-circuit as pre-history. I know those ages exist, but I don’t count them because reprogramming on the fly wasn’t really possible without a manufacturing step. After that, you had the age of “virtual wiring” apps — punch cards. Then came the age of assembly and tape reel applications. After that came PCs and the age of kernel-programs ( like the Apple IIe ). From there, we get into the age of console apps (CP/M, Unix, Dos, etc ), where the OS actually did a lot of heavy lifting in the application. From there, we enter the age of event apps ( aka mouse GUI apps ), then we enter into the age of distributed apps ( web apps with a front/back end split ). We are, right now, deep into the age of distributed apps with apps like “Facebook, YouTube, or Google” being the majority of modern app creation. Every age of applications builds on the age before — at the end of the day, some circuit has to switch somewhere to light up a pixel in your monitor.

Today, we are entering the age of intelligent applications. An intelligent application is a distributed application that uses one or more AI technologies as part of the experience. The first of these is the co-pilot, which helps you work with / use a single existing application. A good example is Github copilot, or the various chat assistants built in to search engines. However, I don’t think that copilots will be the emergent applications of this age — I think they’re a transition technology, as the real intelligent applications are getting created at this minute.

So, let’s define intelligent applications. Intelligent applications will perform the work of more than 1 traditional application. They won’t be the copilots, but the drivers of the workflow. We humans will be the copilots, co-working with these intelligent applications to accomplish a task. I think RPA/UI Automation might be the first obvious place these intelligent applications come into existence. We will, very soon, be able to prompt something like “Using Firefox, go to fidelity.com and download a 90 day CSV for account X. Load that account into excel and calculate average yield across my dividend paying stocks”. That prompt is an instruction set for an intelligent application that uses Firefox and Excel as tools.

Now, let’s take that scenario to the next disruptive level. There’s no reason that scenario needed firefox or excel. I really just wanted to compute yield across stocks, and the application could have done the work with REST and some simple math. So the next stage of intelligent application will do that — by writing a program to do the workflow on the fly. Suddenly my need for both firefox and excel, in that scenario, are gone.

Here’s another good use case. YouTube has been playing cat and mouse with Ad blockers. An intelligent application could go to youtube, load subscriptions, “watch” the videos, and then download the video. It could then remove all ads/sponsors from the stream to get a clean version of the content, and then on the fly, generate a recommended page you can watch on your couch at night. The cat/mouse game is over at that point — because YouTube did serve ads, and there’s no real way to understand that the user is not a real human.

The industry is not prepared for this. We’re not ready for an age where LLMs author applications on the fly to get and parse data, then present it to a real human after altering it. We’re not ready for the day where spreadsheets like Excel aren’t needed at all — even if you want to see a table display — because an intelligent application can literally create what you want on the fly.

You might tell me that’s nice — but there’s no way ChatGPT can do this now! And you’re right — it can’t. Which is why I say the first wave of intelligent applications are the copilots. Right now, I’m using copilots to write code in languages and frameworks I do not know. It won’t be long until the copilots start writing most of the code, and once that happens, the new age will dawn.

September 24, 2023

Minimal “tiny home” with modern conveniences

Filed under: Uncategorized — ipeerbhai @ 9:12 pm

I’ve been dreaming about off-grid living and grid tie lifestyles. There are a few products that I think can combine into a neat camper power supply system.

For cooking, a single burner induction stove and an air fryer.

Build it like a boat or camper — foam board with fiberglass on an aluminum trailer ( so I guess it would then be a camper )

Hmm…

Foam core building System

Filed under: Uncategorized — ipeerbhai @ 8:20 pm

I saw this YouTuber, N0MAD, and this video: https://www.youtube.com/watch?v=38DMX7s9-X8

It was so cool. It got me inspired to try and copy his system with some tweaks to build a catio, and maybe some other things in the future.

I’ve been thinking about the problem of lightweight construction, as I’m getting pretty old. Something fast and easy to build in a day, but large, strong, and practical. Something that could be a shed, or an outbuilding, or other home components. Something that looks nice and professional. Something where I need a lot of easy labor, rather than a small amount of hard labor. I’d much rather spend an afternoon cutting foam than lifting plywood.

So, my thoughts — use this guy’s system as much as possible.

Use fiberglass or carbon fiber once in a while, when I need flexural strength. 3D print a small number of internal spacers and glue-in connections.

Maybe even a robot to make cuts. Nichrome wire is cheap, and weighs nothing. But that’s a bridge that’s pretty far.

I’ve also been watching lots of videos on Sailboats ( I own a 38-footer, which scares me a lot cause I’m a sucky sailor. I think I’ve crashed while docking/undocking every time. Every time. I’ve taken sailing lessons! And I do great at them. But the sailboat in lessons is much smaller and has an outboard — my boat is bigger and has an inboard. Not nearly as maneuverable. I wish there was a VR sailing simulator that helped me with current and wind…) The sailboat community has some pretty interesting techniques for repair as well. So, I’m kind of combining ideas across the communities.

Tools / Materials

I’m not trying to sell you anything, or at least, not this stuff. I’m happy to sell you some AI consulting time, or help you design SIEMs, software security systems, etc — but this is just an interesting idea to me right now. None of these links are affiliates or anything like that.

  1. A foam cutter.
  2. Glues
  3. Foam
  4. Bonding Primer
  5. Fiberglass mesh screen

Let’s talk finishing.

I’m thinking veneers would be nice. Another YouTuber I watch, SpearIt Animal, did a boat project and used this supplier for veneers: wisewoodveneer.com. He suggested their 3m peel and stick system. They have a couple walnut choices that I like.

Boaters also use a waterproof epoxy to attach things to the fiberglass hull of their boats. I’m thinking that if I fiberglass over the foam, I too can do this.

So, how do we fiberglass pink foamboard? doesn’t it melt?

Yes, pink foamboard will melt from the chemical reaction of fiberglassing. Boat YouTubers to the rescue! https://www.youtube.com/watch?v=dru51u-vLCs

So, 5 coats of interior latex paint will create a heat barrier to stop the fiberglass from melting.

We now have a formula for structural fiberglass / CF on cheap and easy to get foam that we can finish to look like real wood and weigh a fraction of real wood. Very interesting….

March 7, 2023

If You’re a Laid Off Software Engineer

Filed under: Uncategorized — ipeerbhai @ 6:47 pm

So you got the sudden news. The company that you’ve been at, perhaps since you graduated high school, no longer needs your services. It’s emotionally difficult, and financially might be a problem. Here’s my loose thoughts on what to expect and how to recover.

TIME

The biggest thing to expect is time. It takes, on average, 6 months to find a new job. If you’re over the age of 32, it probably takes 18 months. Such is life. Your severance and state unemployment insurance will help — you’ll probably get enough money for your lifestyle to last, unchanged, for about 6-9 months. After that, your finances will start to severely weaken. It’s best to start looking at expenses you can cut immediately. What those are — it’s up to you. But look for monthly expenses you can cut. Don’t cut things that you need to keep your sanity. You’ll have a lot of free time, and you’ll need to fill it.

Finances

I am not a financial advisor. Don’t take financial advice from some random blog post on the internet. My personal opinion is that you should redistribute your investments towards passive income. You’re going to have to last a potentially long time, and passive income extends how long your finances will hold. I’m a fan of equity REITs, covered call funds, and income funds. It is possible to earn about 1% per month of your investment assets as passive income. Some sectors I avoid — mREITs and Nasdaq covered call ETFs, such as NLY or QYLD. These sectors have great dividends and can be good plays at the right time, but I’ve lost money on them. In a rising interest rate environment, mREITs, which play the float between borrowing and lending, lose a lot of book value, thus reducing their dividend safety. Nasdaq volatility also hits cash balances of covered call funds, and the Nasdaq tends to go down in value more so than other indices when interest rates rise. So, I personally would not rebalance into companies/funds in those sectors. I have had unusually good luck with S&P 500 covered call funds, Pimco and Guggenheim income funds, closed end option writing funds, and preferred stocks. Still, if you have $100,000 in investment assets, you can get about $1,000 / mo in passive income. That is not risk free! Again, such is life. Some people augment those investments by writing covered calls or cash covered puts. That requires more expertise than I have — the few times I’ve tried those strategies, I have lost. However, “The Average Joe Investor” channel on YouTube does well with that option strategy — so it is possible to make it work. Take little risks as you rebalance and allow lots of time between moves. This is both for taxes — every sell can incur tax liability — and for risk management. Passive income creates a tax nightmare, and you often trade future growth for short term income. So, you’ll be balancing your immediate income needs against your future wealth.

Buying a business

One thing to seriously consider is buying an existing business using either seller financing or SBA loans. Look for a boring business ( but not a dry cleaner ) with an EBITA free cash flow of at least $100,000 / year. There is a YouTuber named “Codie Sanchez” with lots of advice about this path. I have not tried this path, but would seriously investigate this if I were laid off.

Creating a Startup

Some people create a startup after a layoff. This is a terrible idea. Your odds of success in this case are 1/400 to maybe 10/400. Those are not good odds. It takes, on average, 5-7 years for a startup to figure out how to generate predictable revenue and build a sales/marketing play book. It’s very tempting, especially for an engineer, to “build something” then try to sell it — a recipe for disaster. If you do make a startup, your number 1 problem is “meet people and talk to them” — not “build something”. This may be over YouTube, TikTok, Reddit, or in person. Then, you’ll find that the skills you need to solve the problems you discover are mismatched with the skills you already have. The most common skills you really need are “Full Stack Web Developer”, “Game Programmer”, or “AI Implementation Engineer”. Whatever you were working on before the layoff — it likely wasn’t in those three stacks, and learning them will take a while. It takes a lot of time, energy, and money to make a startup actually work. It may take more people than just you. It is possible for this path to work — you might make a great startup — but the odds are very much against you. If you do decide on this route — do job 1 first — talk to people. Lots of people. You need to have at least 50 real, human conversations before you build something.

Job Hunting

It’s very hard to find a job when you’re unemployed. Step 1 is “Don’t be unemployed”. Start a consulting firm and form an LLC. Employment verification services like “HireRight” look for government forms like a business license and / or W2/1099/K1 forms. Having the LLC and getting “Popcorn revenue” will help you market yourself better and pass employment verification hurdles. Owning a firm may disqualify you from collecting unemployment in your state — so check with state laws first. In WA state, you cannot be an officer in a corporation and collect unemployment. This means don’t take titles at your cousin Bob’s moonshot startup with 0 pay — it will impact your unemployment insurance. Instead, check your local laws, form an LLC, and look for “gig coder” platforms like Upwork or rando consulting firms. That’s your best shot at getting an occasional gig, which helps with both revenue and employment verification when you do land your next W2 role. Do use a payroll processor like Gusto to pay yourself. You need third party paychecks to verify employment, and you can issue yourself a paycheck with these systems whenever you get an offer. Do a few tests first to make sure the system is working. Make sure to issue yourself a paycheck at least 1x per year.

Use LinkedIn. I have had very good luck with LinkedIn when looking for jobs because it lets me jump past HR and whatever AI of the day they are using to screen resumes. Send connection requests like mad to people with titles that you could report to, like “Principal Lead Software Engineer”, and say simply, “Hi <Person>. I was part of a recent big tech layoff wave and am looking for the next step. I know X/Y/Z. Do you know of any open roles, or can you point me in the right direction?”. Also, make posts and comment on others’ posts. Each post / comment will boost the number of recruiters who see your profile, and increase your inbound recruiter contact rate.

Use AI. Don’t know what to say? Ask chatGPT to help you. It can write your resume. It can write you a cover letter. It can write contact requests. It can write emails. It can even write code. You’ll find it a great resource to help you move forward in your job hunt.

Don’t get stuck on your title. Were you a “Senior Software Engineer”? Don’t apply for only “Senior Software Engineer” roles. This layoff may be a great chance to get your next promo, or change to nearby fields like management. Do keep your mind broad in your search.

Learn Stuff

While I advise against building startups, I do advise for “gig programming”. As you do it, you’ll learn that you don’t know enough. A good place I’ve found to learn new stuff is Udemy. Personally, I think Udemy’s per-course pricing is really good, and you can get good deals on the site. I also like the “pay once” model instead of the SaaS model other sites use.

Emotional Damage

You will take emotional damage. And while it’s hard to see — it’s not the end of the world. Avoid getting stuck, and instead, focus on a single metric like contacts / day or applications / day. You can switch the metric from time to time ( for example, after 2 weeks switch from applications / day to contacts / day ), but not too often ( say no more than 1 metric switch per week ). Then drive yourself to improve that metric. Landing on your feet is simply a matter of numbers. You need to have conversations, and some of those conversations will lead someplace.

Good luck — we’re all pulling for you!

July 23, 2022

Blender Python Stuff

Filed under: Uncategorized — ipeerbhai @ 9:40 pm

I’ve been learning Blender. It’s probably just my newb skill level, but the python API is wonky to me. These notes will hopefully help me remember weird stuff. There’s a lot of weird stuff.

Background

Blender has a pretty complex API that seems like an afterthought to the program. Examples of this are in armatures/bones and sculpting, to name just two. The API is also not consistent — like it was written by different people over time. For example, you can’t just do armature.bones[5].select_set(True) like you can with mesh items. You also can’t do what you want in a sane way like armature.bones[5].link(newBone) — which would make more sense to a programmer. The Blender API is an afterthought to the UI, and it shows all over the place. Sculpting is another example where the API just presumes a viewport and pen. You can sculpt as a programmer — but then you’re manipulating the UI as if you were an artist. You can’t just do something sane like:

brush = … GetBrush(Brushes.Sculpt.Snake)
brush.position = [x, y, z] ## World coordinates
brush.rotation = [thetaX, thetaY, thetaZ] ## radian rotation around X, Y, Z axis relative to brush center point.
brush.scale = 50 ## diameter of brush circle
brush.TargetObject = myMesh
brush.Stroke(pressureLevel=4, startPosition = brush.FindNearestVertex(), displacement=[x, y, z])

Compared to something like Pixar’s USD API — which seems to have put the programmer first ( but requires the programmer know a lot about concepts from 3d animation ), it’s downright confusing.

Also, Blender presumes pens, not mice, in places, and treats the mouse as a poor quality pen.

Enough grousing — let’s get to listing the API to do things I seem to always forget.

Selecting Objects

Objects in blender have 2 different data format existences, which then span across different modes of existence, called a “context”. For example, a mesh object will exist as both a “scene” data type and a “data” data type. The “data” instance is a child type of the scene type. You can’t call API functions that expect a data object with a scene object — even though they both represent the same physical thing in Blender. It’s a real pain, because both data types have a .name member that you can use to identify the object — but they aren’t the same. The scene object name is the outliner name, while the data name is unique to the blend file. Blender will automatically update the data name if you try and collide it — like if there’s globally already a bone with the data name “leftArm”, then blender will add “.001” to the data name, but not the scene name. When calling search functions for an object by name, this can cause a lot of problems, because each search can return a valid but incorrect type for what you want to do. This type system is just a bad choice, and Blender should do away with it. A single data type per object, with a global name and an outliner name as properties. Something like Mesh.GlobalName = “Person1.LeftArm”, Mesh.Name = “LeftArm”.
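
Here is a small illustration of the two names, assuming the blend file contains an object with the outliner name "Armature":

import bpy

obj = bpy.context.scene.objects.get("Armature")  # scene-level object, found by outliner name
if obj is not None:
    print(obj.name)       # outliner (scene) name, e.g. "Armature"
    print(obj.data.name)  # data-block name, e.g. "Armature.001" after a name collision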

Anyway — back to selection. Here’s a list.

  • To get the scene object, use object = bpy.context.scene.objects.get('ObjectSceneName')
  • bpy.context.scene is the same as bpy.data.scenes[index] in some cases, and sometimes the same as the reference to the data blocks; bpy.data.scenes may be the better choice most of the time, as bpy.context is a variable type and can change members on the fly.
  • Depending on context ( aka, what’s selected, what mode you’re in, what window is loaded, etc ), you can get different types back. The Scene object has a .type member you can query, while the data object does not ( it should — the different-authors problem is visible here! ). See if the .type member is available — if so, you have a scene object and not a data object! (There’s a quick sketch of this check right after the list.)
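
A sketch of that .type check, again assuming an object named "Armature" exists:

import bpy

def describe(thing):
    # Scene-level objects expose .type; data blocks (mesh data, armature data, ...) do not.
    if hasattr(thing, "type"):
        print("scene object:", thing.name, "of type", thing.type)
    else:
        print("data object:", thing.name)

obj = bpy.context.scene.objects.get("Armature")
if obj is not None:
    describe(obj)       # scene object
    describe(obj.data)  # the data block behind it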

Extruding a single bone at the end of an Armature

When I tried this, I hit so many walls, including extruding a single bone on the tail of *every* bone of the armature, because Armature.bones[index].select_tail = True will select the tail of every bone in the Armature, even though an index was given. Oh, and it doesn’t work in Edit mode; it only works if you’re in object mode, do the selection, then enter edit mode. To actually select just the tail of a specific bone via index, it’s a bit of a hassle. Here’s the basic idea.

  • Start in object mode;
  • Select nothing.
  • Select the Armature
  • Go to edit mode.
  • Select nothing.
  • Get the bone as data object you want
  • Go to object mode.
  • Select the tail in object mode.
  • Go back to edit mode to apply the selection.
  • Extrude move with an orientation matrix

Yes, I’m serious, that’s how you select the tail of a specific bone reliably in Blender. Did I mention the API is nuts? If I wanted to select a Vertex in a mesh, I would use Vertex.select_set(True) in edit mode — but bones, which are also child members of a higher-level object, don’t have a select_set function in edit mode. You can see that the selection system was written by different people at different times, and you can see it was an afterthought right here.

Here’s some sample code from an experiment of mine that seems to work.

## Deselect all, enter object mode
bpy.ops.object.select_all(action='DESELECT')
bpy.ops.object.mode_set()

## select the armature and get a reference to the scene object for it.
armature = bpy.data.scenes[0].objects.get('Armature')
armature.select_set(True)

## Change to edit mode and select nothing.
bpy.ops.object.editmode_toggle()
bpy.ops.armature.select_all(action='DESELECT')

## Get the bone I want to extrude another bone from, and select the tail by edit mode foolishness (IMHO, this is a bug).
lastBone = armature.data.bones[len(armature.data.bones)-1]
bpy.ops.object.editmode_toggle()
lastBone.select_tail = True
bpy.ops.object.editmode_toggle()

## extrude a bone 1 unit vertically constrained on Z from this selected tail
bpy.ops.armature.extrude_move(ARMATURE_OT_extrude={"forked":False}, TRANSFORM_OT_translate={"value":(0, 0, 1), "orient_axis_ortho":'X', "orient_type":'GLOBAL', "orient_matrix":((1, 0, 0), (0, 1, 0), (0, 0, 1)), "orient_matrix_type":'GLOBAL', "constraint_axis":(False, False, True), "mirror":False, "use_proportional_edit":False, "proportional_edit_falloff":'SMOOTH', "proportional_size":1, "use_proportional_connected":False, "use_proportional_projected":False, "snap":False, "snap_target":'CLOSEST', "snap_point":(0, 0, 0), "snap_align":False, "snap_normal":(0, 0, 0), "gpencil_strokes":False, "cursor_transform":False, "texture_space":False, "remove_on_cancel":False, "view2d_edge_pan":False, "release_confirm":False, "use_accurate":False, "use_automerge_and_split":False})

## this extruded bone is now the active object, capture a reference to it before I lose it.
newBone = bpy.context.active_bone


Another issue is bone selection.

Selecting bones in functions

You cannot store a bone object in python reliably in blender. If I do this:

bone = armature.bones["someBone"]

then update the armature, the bone reference I have may move on me. This is likely a bug in Blender’s internals, and how they manage bone updates. This one was a real doozy to figure out, as it made no sense. I literally watched the bone change after a call to bpy.ops.object.editmode_toggle(). I wouldn’t have believed it otherwise. As a work-around, save the name like so:

boneName = armature.bones["someBone"].name

You can then use armature.bones[boneName] in functions where you would want to use the bone object.
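
A tiny sketch of that pattern, assuming a scene object named "Armature" whose armature data contains a bone named "someBone" (the same placeholder name as above):

import bpy

armature = bpy.data.objects["Armature"].data     # armature data block
boneName = armature.bones["someBone"].name       # store the name, not the bone reference

def select_tail_by_name(armature_data, bone_name):
    # Re-resolve the bone by name on every call so a stale reference can't bite us.
    armature_data.bones[bone_name].select_tail = True

select_tail_by_name(armature, boneName)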

Adding Pose Constraints to a Bone:

The UI adds pose constraints to a bone by a very complex method. Turns out there’s a simpler method (a consolidated sketch follows the list below).

  1. The Armature scene object (not the data object) has a .pose member ( I’ll call it scene.X and data.X respectively in this post)
  2. The .pose member has an array of bones ( the same array indices/names as the data.bones array ).
  3. Each bone has an array of .constraints on it. You can call .new on it. Here’s an example:
    1. ikConstraint = armatureAsSceneType.pose.bones["Bone.001"].constraints.new("IK")
    2. the return value of the .new operator is the instance of the constraint object, not a status code like other new operators in Blender’s API (remember, there is no consistency in the API. Sometimes you get a status, and sometimes you get an object. When do you get each? Who knows. Go uses a tuple convention of <object, error> and stays consistent with it — Blender should also.)
  4. Constraint objects have simple property accessors. You can simply set them. But, you need to know their type. Here’s what I know by example:
    1. ikConstraint.target = armatureAsSceneType ## scene data type
    2. ikConstraint.subtarget = ikBone.name ## string
    3. ikConstraint.use_stretch = False ## boolean
  5. I’m not sure you need to be in Pose mode to create/edit bone constraints. The API looks like it would work in any mode, but many things in Blender will give you internal errors if you call an API in the wrong mode. If you get context.poll() errors, try pose mode.
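
Pulling the steps above together, here is a sketch that adds an IK constraint to a pose bone. It assumes a scene object named "Armature" with pose bones named "Bone.001" and "ikBone":

import bpy

armatureAsSceneType = bpy.data.objects["Armature"]  # scene object, not the data block

ikConstraint = armatureAsSceneType.pose.bones["Bone.001"].constraints.new("IK")
ikConstraint.target = armatureAsSceneType            # scene data type
ikConstraint.subtarget = "ikBone"                    # bone name as a string
ikConstraint.use_stretch = False

# If you hit context.poll() errors, try switching to pose mode first:
# bpy.ops.object.mode_set(mode='POSE')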

May 30, 2022

On 3d art

Filed under: Uncategorized — ipeerbhai @ 8:10 pm

A long time ago, I watched Andrew Price’s video on donuts. This was pre blender 2.8, and certainly pre 3.0. Even longer ago, maybe a decade or so, I tried my hand at writing a cad to 3d art pipeline. I kind of succeeded, too. I was able to generate a mesh, get it over to unity, and generate materials.

This started my research into a quest that I’ve been trying to figure out forever — is it possible to make an actual, honest-to-goodness, anime while having 0 skill other than programming?

Is it possible to make something as specific as an anime entirely with code?

The answer to that question, I think is “Yes”. It likely has been since around 2018. But, no-one has put together the pieces quite yet. Integration is the missing component.

So, enter Nvidia with their Omniverse. The idea they have is to connect lots of different tools together via an integration system based on Pixar’s Universal Scene Description. USD itself is a scene description language that reads like a mix of Python and OpenSCAD. When you “export” an asset to the USDA variant of USD, you can actually see how it turns the asset into polysurfaces, and how it defines meshes and materials, lights, and maybe even bones.

What are the big tools in the Omniverse suite?

Well it seems the big tool is “Create”, which is a very small content creation tool / library manager. I hesitate to call it content creation — it does have the ability to make meshes and materials, but the ability is small. It’s more like it has the ability to refine and structure imported assets. “Asset Libraries” seem to be the actual base object that it expects artists to start from. This is very different than a tool like Blender, where the starting point is the artist themselves and “Modelling” of some sort. You then put together a pipeline of items to make a project in Omniverse. So, start with an asset library like Kitbash and import the assets into a base scene in Create, add characters from Character Creator, add in a physics engine to help you place objects and have them fall into position, do pose modifications in Unreal, and export a render from Blender. This seems to be the target workflow. Of course, your final tool is up to you. Maybe you want to do all work entirely in Unity — you can.

How is this art?

I guess it’s concept art or marketplace art. It doesn’t have the cohesive vision of a single artwork, but it could enable a wider variety and type of art.

Now that I understand the concepts, can I make an Anime with it?

Well, no. You can maybe get the scene ready and get some script dialog synchronized, but you can’t quite yet make an Anime from it. Still missing — content, shaders, and music. You’d still have to use a secondary shader like Octane to get the Anime cel-shaded look from a 3d scene. Anime has specific weird deformations of character mouths / faces depending on camera angle. One example is “side speech” where the real mouth in a 3d model would have to be orthographically projected to the camera angle, which just looks insane in 3d models. You’d probably need to “get close” via 3d engines, then use an AI to redraw the scene correctly. This AI doesn’t exist, but style transfer is close. It might come to be in the future. AI driven music may already be here — I just don’t know yet. The final problem — you don’t have the models you want for your art. Let’s say I wanted a “rabbit wearing a tuxedo” as a character in my Anime. Well, if I can’t find a textured and rigged 3d model of it already, then I’m out of luck. I’d have to model the rabbit ( or hire someone ) complete with texture and armature.

What are the other problems?

Speed. This entire process requires me to know the basics of how to use a lot of different tools. And even if I can code, there isn’t a single language. Unreal, for example, uses Blueprints and C++, while Blender uses Python. The AI models aren’t really exported in a pluggable format quite yet. I can’t build a docker container with a lip-sync AI, feed it text and have it output an armature control vector that I then feed to a head model. I guess Nvidia’s Kit SDK might allow that, but then I’d have to learn yet another thing along with all the tools needed to get there.

What about toon boom or other cartoon makers?

These are nice tools for a skilled artist to use. But I’m not a skilled artist. They won’t enable me, the talentless hack, to make an Anime. I’d have to draw all my assets and rig them ( or pay others ), and they’re not really designed for programmers. 3d Art is the one place where Artists rule and coders drool. Technical art really is currently ruled by “the pipeline”. I mean, I’d love to be able to just write this:

import “rabbit.usda” as rabbit

with rabbit:

translate(x, y, z)

rotate(x,y,z)

rabbit.say(“Ehhh…. Whats up doc?”)

rabbit.pose([0, 0, 0, 0, 0, 0, .25, .25]) ## use standard control armature positions

But you know, that ability doesn’t exist.

It might make more sense to bind the rabbit armature to control points, then simply get the points and translate those in time-position relative r4 space. Rabbit.ControlPoints["Left Elbow"].TranslateFromCurrent([x, y, z, t])

Today, what I’d have to do ( assuming a rigged model rabbit exists ) is import the rabbit.usda into blender or Unreal, use either tool’s workflow to pose the rabbit, and do pose-to-pose transitions for lipsync and movement. While a tool like “Audio2Face” might help with the lip-sync, I still have to move the elbow. Imagine creating “S” curves on a dope sheet for characters interacting — it would take me days/weeks for a simple conversation. A skilled artist could probably do it in 1/10th the time – but that’s still a lot of work.

And I haven’t even thought of music or voices yet.
