A handful of scenes aren’t supported in this old version (namely the OpenPBR examples and the distance shader example).
Since Houdini 20, OCIO is the default colour manager. This means in Houdini 19.5 (which does not natively use OCIO), your scenes will look different in the viewport.
A few weeks ago I finished the SideFX Mardini Challenge, a month-long marathon to submit a new Houdini artifact daily. Each submission was developed and rendered in less than a day, the same as last year’s HOULY challenge (you can find these submissions on my portfolio page).
Now I finally have some time to upload the scene files, provide some insight into the most challenging entries and develop my favourite ideas further over the coming weeks. Expect more detailed posts soon!
The ‘Droste’ effect is used to produce Escher-style results, as explained here.
Current tests already give really cool results:
To achieve this I simply distort the rays that the physical (now not-so-physical) lens shader produces, which is less efficient than just applying the effect as a post-process in comp. However, there are benefits to seeing the effect in the Karma viewport as you work.
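For reference, the heart of a Droste-style mapping is just a log-polar wrap of the 2D screen coordinate before the ray is generated. Here is a minimal sketch of that wrap (plain Python for illustration only; the actual version lives inside the Karma lens shader, and the r1/r2 radii are arbitrary example values):

import cmath, math

def droste_wrap(x, y, r1=0.25, r2=1.0):
    # Repeat the annulus between radii r1 and r2 towards the centre.
    # This is only the basic log-polar wrap; the classic Escher spiral additionally
    # rotates/scales log-space (roughly by cos(a) * e^(i*a), a = atan(log(r2/r1) / (2*pi))).
    z = complex(x, y)
    if z == 0:
        return x, y
    period = math.log(r2 / r1)            # width of one repetition in log space
    w = cmath.log(z / r1)                 # to log-polar coordinates
    w = complex(w.real % period, w.imag)  # wrap the radius into [0, period)
    z = r1 * cmath.exp(w)                 # back to Cartesian
    return z.real, z.imag

In the lens shader, the equivalent would be applied to the ray’s screen-space position (or direction) before the ray is fired.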
Setup
I’ve tidied up the scene file a little and made it self-contained (no external .vfl files), so hopefully you can dig into it.
I had issues getting a prebuilt version of OpenEXR working properly in my project, so I decided to just build it myself. I ran into a couple of issues, so I’ve quickly summarised them here in case anyone else has the same problem. In my case, I needed it to work on Windows 7 in Visual Studio 2013.
IlmBase Libs
IlmImf and other projects will complain about an error like this:
error MSB6006: "cmd.exe" exited with code -1073741515.
As explained in this Stack Overflow post, you have to manually copy the IlmBase libs (Half, Iex, Imath, etc.) into the OpenEXR release/debug build directory for the build to succeed.
Zlib unistd.h error
After I fixed that, I got compile errors from within Zlib related to unistd.h being missing on Windows. Again, Stack Overflow saved my bacon and explained that you need to edit line 287 in zconf.h, changing
#if 1
to
#if HAVE_UNISTD_H
After this I managed to build it properly and use it in my project.
I recently ran into an issue where I wanted to use a piece of geometry as a sky dome, seen through a glass window. This accidentally disabled my sun directional light; it turns out the sky geometry was blocking it.
My usual fix was to add the “Indirect Lighting” attribute, but in this case it also prevented the rays refracted through the glass from reaching the sky dome geometry. After a bit of research I found https://renderman.pixar.com/view/TGT_REYES_Lighting, which explains Trace Sets as the workaround for this.
The article already explains it well, but this is how I applied it in my case:
Create a set containing the geometry you don’t want to cast shadows
In the attribute editor, make sure the set is selected and add the trace set attribute via Renderman >> Mark as Trace Group
Now select the light that should ignore this object, go to its Shadows tab and choose the trace set you just set up
I’ve spent the past few months developing an OptiX-based fractal renderer (as well as a little pipeline) for my Major Project team. It’s still unfinished, but in the end I hope it has at least some of the following features (not all of them are done yet):
Editable scene compiled at runtime into a hit function based on a node graph (the runtime compilation/PTX patching is done, but the node graph still needs work and I’ve fallen back to hard-coding the scene description for now). Although the node approach likely won’t be 100% as controllable as writing all the clever scene description stuff by hand (like on Shadertoy), it would allow me to quickly patch together a cool-looking scene.
Passing in data from GUI controls without recompiling the whole hit function, maybe adding the ability to keyframe values or derive their values per frame from Disney’s SeExpr expressions.
Path Traced Lighting (Done, but it looks very odd on my fractal geo, possibly because I don’t move the origin ‘out’ from the hit point before firing a shadow ray; see the sketch after this list)
Exporting to .exr with handy render passes (Done, OpenImageIO is great for this) – Here’s the output (my normal channel ends up a bit funky, likely related to the same issue that is causing the broken path tracing)
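On the odd-looking path tracing mentioned above: the usual fix is to nudge the secondary-ray origin a small epsilon ‘out’ along the surface normal, so the shadow ray doesn’t immediately re-intersect the surface it started on. A rough sketch of the idea (plain Python; the real fix belongs in the OptiX hit program, and the epsilon value is an arbitrary assumption):

def offset_ray_origin(hit_point, normal, eps=1e-4):
    # Push the origin slightly off the surface to avoid self-intersection (shadow acne).
    # hit_point and normal are 3-tuples; normal is assumed normalised and outward-facing.
    return tuple(p + eps * n for p, n in zip(hit_point, normal))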
Current State
Although there’s still much to be done (outputting animating fractals for instance), this is the type of fractal I can currently render (with a little comp, thanks to the handy render passes):
Movement Test
The fractal in my major piece has to be moving, specifically some form of tunnel. Given the current state of the renderer, I had to do a separate test just to see how the fractals could move.
A Klein bottle can be thought of as a 3D Möbius strip: a closed surface with no distinct inside or outside that ends up looking like an odd bottle. I don’t claim to be an expert at generating such things, but Paul Bourke is, and you can read about it (and loads of other cool shapes) in more detail on his site.
Generating geometry in Houdini Python
While it seems possible to generate geometry in VEX, I decided to see how to do it with Python, as it gives me an excuse to dabble in the Houdini Python API and I think it fits this type of project better. The bible for anything Houdini Python is the documentation itself, so I started looking at the examples there.
The main thing I discovered was that generating geometry is limited to the Python SOP context, so a regular Python node will not work. You need to make a new node using File >> New Operator Type >> Python Type, then set the Network Type to Geometry Operator.
Once all that’s set up, it’s very easy to create geometry; simply use something like the sketch below.
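A minimal example, building a single triangle inside a Python SOP (the positions are arbitrary, just to show the basic hou.Geometry calls):

geo = hou.pwd().geometry()   # the writable geometry inside a Python SOP
poly = geo.createPolygon()   # a closed polygon; vertices are added below
for pos in ((0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 0.0, 1.0)):
    pt = geo.createPoint()
    pt.setPosition(hou.Vector3(*pos))
    poly.addVertex(pt)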
Although I had to write some extra code to generate polygons properly, generating the points of the Klein was easy to do in a loop and stayed pretty fast until I pushed the point count very high.
Adding attributes to new points
The only other issue I had was adding attributes to points: I got errors about the attribute not existing on the point. I eventually found an (obvious in hindsight) solution: you simply have to add the attribute to the geometry first, and then you can set the value on the point:
# Declare the point attribute on the geometry first...
nrmAttrib = geo.addAttrib(hou.attribType.Point, "N", (0.0, 0.0, 0.0))

for i in range(0, N):
    for j in range(0, N):
        u = umin + i * (umax - umin) / float(N)
        v = vmin + j * (vmax - vmin) / float(N)
        p = eval(u, v)  # eval(u, v) is the parametric Klein function
        p0 = geo.createPoint()
        p0.setPosition(p)
        # ...then the value can be set on each new point
        p0.setAttribValue(nrmAttrib, calcNormal(eval(u, v), eval(u + delta, v), eval(u, v + delta)))
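The eval(u, v) helper isn’t shown above; for reference, one common ‘figure-8’ immersion of the Klein bottle (not necessarily the exact parametrisation used here) looks like this:

import math

def klein_figure8(u, v, r=2.0):
    # Figure-8 immersion of the Klein bottle, with u and v in [0, 2*pi).
    # A reference sketch only, not necessarily the parametrisation used above.
    a = r + math.cos(u / 2.0) * math.sin(v) - math.sin(u / 2.0) * math.sin(2.0 * v)
    return (a * math.cos(u),
            a * math.sin(u),
            math.sin(u / 2.0) * math.sin(v) + math.cos(u / 2.0) * math.sin(2.0 * v))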
The result
Here’s the final Klein geometry rendered out in Mantra with a glass shader. It’s missing a section because of the way I’m generating faces for it and I lack the time to debug it, but I think that makes it more unique than the generic Klein bottle anyway.
I’ve been looking into using NVIDIA OptiX for my fractal lookdev tool, as I need real-time feedback while traversing the landscape.
I really like the API so far, it even comes with a Julia example demo in the SDK! I’ve modified it to render the Mandelbulb instead, with little trouble (and with refraction).
However, I needed to read the buffer data back to write .exr files to disk (this is a vital feature, so it’s better to make sure it’s viable at the beginning of the project). That shouldn’t be a problem, as I can easily grab the buffer’s device pointer and then cudaMemcpy it over.
All I need to do is include <cuda.h> and <cuda_runtime.h>, no big deal right?
Linking CXX executable ../bin/julia
CMakeFiles/julia.dir/julia.cpp.o: In function `AnimCamera::apply(optix::Handle<optix::ContextObj>)':
/home/tom/src/optix/SDK/julia/julia.cpp:151: undefined reference to `PinholeCamera::PinholeCamera(float3, float3, float3, float, float, PinholeCamera::AspectRatioMode)'
/home/tom/src/optix/SDK/julia/julia.cpp:153: undefined reference to `PinholeCamera::getEyeUVW(float3&, float3&, float3&, float3&)'
collect2: error: ld returned 1 exit status
Oh.
Well maybe it’s because I included them before OptiX.
#include <optixu/optixpp_namespace.h>
#include <optixu/optixu_math_namespace.h>
#include <cuda.h>
#include <cuda_runtime.h>
using namespace optix;
/usr/local/cuda-7.0/include/cuda_runtime_api.h:257:17: error: ‘cudaError_t’ does not name a type
extern __host__ cudaError_t CUDARTAPI cudaDeviceReset(void);
^
/usr/local/cuda-7.0/include/cuda_runtime_api.h:274:36: error: ‘cudaError_t’ does not name a type
extern __host__ __cudart_builtin__ cudaError_t CUDARTAPI cudaDeviceSynchronize(void);
^
/usr/local/cuda-7.0/include/cuda_runtime_api.h:349:17: error: ‘cudaError_t’ does not name a type
extern __host__ cudaError_t CUDARTAPI cudaDeviceSetLimit(enum cudaLimit limit, size_t value);
^
...
/home/tom/src/optix/SDK/julia/julia.cpp:299:95: error: ‘cudaMemcpy’ was not declared in this scope
cudaMemcpy( (void*)h_ptr, (void*)d_ptr, sizeof(float) * totalPixels, cudaMemcpyDeviceToHost );
Oh.
This is dire. Could it be related to the optix namespace somehow?
Linking CXX executable ../bin/julia
[100%] Built target tutorial
Success! It looks like blindly using the optix namespace causes clashes with CUDA; setting it up like I did above fixes that in a janky way. Maybe this is mentioned in the documentation somewhere, but I never found any reference to it.
So if you’re getting undefined references when you need cudaMemcpy in an OptiX project, check your include order and namespace setup (and don’t forget to add ${CUDA_LIBRARIES} to target_link_libraries in the CMake configuration if it’s not there already).
The best solution would be to only bring in the stuff you actually need from the optix namespace, so there’s no namespace clashing at all.
So far, though, OptiX seems pretty solid; I hope I’ll have fun with it!
Update
It turns out that in this particular instance I was better off using the map() and unmap() functions on the OptiX Buffer class instead of cudaMemcpy(), as they remove the need to include the CUDA headers at all. However, I would no doubt have run into the same problems using something like the NVRTC headers.
As an excuse to play with WebGL, I finally got around to playing with the Mandelbrot and Julia sets. There’s a live demo at the end of the post; the zoom control/mouse movement needs some work, but the basic idea works great.
This one is very similar to the code in Ozone3D’s tutorial; I basically copied it for the most part (there’s only so many ways to skin a cat anyway).
The most notable aspect is MAX_ITER, which I replace with the actual iteration value before I compile the shader code. This gets around the limitation in most WebGL implementations that disallows non-constant loop conditions, since the loops are unrolled. That isn’t very convenient for real-time display of any possible iteration count, so I simply recompile the shader whenever the iteration count changes.
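The substitution itself is trivial; sketched in Python here for brevity (the demo does the equivalent in JavaScript before handing the source to WebGL):

def build_fragment_source(template_src, max_iter):
    # Bake the iteration count in as a literal so the GLSL loop bound is a compile-time constant.
    return template_src.replace("MAX_ITER", str(max_iter))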
Note that the “uniform vec2 c” uniform isn’t used here; it’s just there to keep the uniform layout identical for both shaders (I’m lazy and it’s just a demo).
Finally, I use a 1D texture (passed in through a 2D sampler, but the actual texture is only 1 pixel tall) to look up the expected colour based on the iteration level.
The base code is identical to the Mandelbrot shader, except now we can pass in the desired complex parameter through c.
I can’t explain the maths as well as the pages I referenced earlier, but the main change is that the smooth UVs of the fullscreen quad are now assigned to z instead of c, as the complex parameter c is now a user-defined constant and z varies.
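Put differently, both shaders run the same escape-time loop and only differ in which of z and c comes from the pixel. A small Python illustration of the difference (not the GLSL itself; the sample values are arbitrary):

def escape_time(z, c, max_iter=256):
    # Iterate z -> z*z + c and count how long |z| stays bounded.
    for i in range(max_iter):
        if abs(z) > 2.0:
            return i
        z = z * z + c
    return max_iter

# Mandelbrot: z starts at 0 and the pixel supplies c.
mandelbrot_iters = escape_time(0j, complex(-0.5, 0.6))
# Julia: the pixel supplies z and c is a user-chosen constant.
julia_iters = escape_time(complex(-0.5, 0.6), complex(-0.8, 0.156))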
Use the zoom control to zoom in (this isn’t ideal).
Expect slowdown at high iteration counts.
If you want to tinker around with the code, then I apologise for how messy it is; it’s my very first adventure into JavaScript and I kept bolting more stuff onto it until I ended up with this.
ToDo
The Ozone3D page mentions that “a better approach is to use a multipass algorithm”, which is next in line for this project.
It would also be cool to choose a complex parameter for the Julia set by clicking on the Mandelbrot itself, as both sets are connected.