Gagangrene
Equine and Eldritch both start with "E"
Profile pic by: https://twitter.com/tater_art

Age 22


Colorado

Joined on 10/5/15


Gagangrene's News

Posted by Gagangrene - January 27th, 2020


I've been doing a lot of 3D lately.

For one, I've been busy with this model, a premade asset for a game jam.

128 tris exactly. Made with the intention of being placed down hundreds of times and viewed from above, because the team wanted to make a top-down game no matter what.

iu_88707_5523965.jpg

iu_88706_5523965.jpg

All the textures were hastily made while I wait for some better ones. This one's also better than the other trees I've made because all of the leaves are rotated to face upward in a consistent direction, unlike my earlier trees with nearly random leaf-plane rotations.


I also made this for a different team, same Game Jam.

I call it the Oathog, after those 19th century livestock paintings and that "May I have some oats, brother" meme.

iu_88922_5523965.gif

This thing took a lot more work to put together than the tree, of course, which I will go into in more detail in another blog post. However, I do want to summarize: I started with a very, very simple Minecraft-ish geometry shape.

iu_88708_5523965.jpg

I then used the subdivision surface modifier to make its shape more circular.

iu_88709_5523965.jpg

It still only barely resembles the pig. I then extruded some legs out of this orb-y mass, again making legs that just look like cubes. I also cut out a few of the edge loops around the head to make the head more proportionally complex and head-shaped, and gave it some basic ears that were really just bent and squished pyramids. Then I used Subdivision Surface a second time to smooth out the geometry again, getting almost to the model shown at the beginning. I decimated that model (by angle), later added one more set of edges for animation, and that's the end of the story for this model's topology.

Except I diverged during this last step, making a second copy with the subdivision surface cuts cranked up way past a reasonable count. I used Blender's sculpting tools to etch out eyes, nostrils, a mouth, and a few fat folds around the body, and then with this high-poly mesh I baked a normal map onto the low poly.

High poly:

iu_88710_5523965.jpg

Low poly with normal map:

iu_88711_5523965.jpg

:D

Giving it an armature was interesting. I wanted the legs to barely waddle, as if its fat were locking up its legs like honey, but at the same time it's very characteristic of animals like this to have well-defined shoulders. I ended up giving it more shoulder than foreleg.

iu_88917_5523965.jpg


But wait, there's more!!

:DDD

iu_88919_5523965.jpg

rock wolf

There's a lot going on in this model too. Firstly, the red-dot eye is just a texture, entirely controlled with a shader graph that uses an empty's object space as a UV map. It's also made symmetrical, one eye mirroring the other, by adding its coordinates to a copy of themselves with the X value multiplied by negative one. This incidentally means the eye is mapped more cylindrically than a true sphere mapping, but that's not a problem to me because you can only see the eye from one side at a time...

iu_88918_5523965.jpg
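That mirroring trick is easy to sanity-check outside Blender. Here's a toy Python version (the function name is mine, not anything from the actual shader graph): adding a point to a copy of itself with X negated sends both sides of the head to the same texture coordinate, which is why one texture serves both eyes, and also why the X axis drops out of the projection.

```python
def eye_uv(p):
    """Add a point to its X-mirrored copy: (x, y, z) + (-x, y, z).

    X cancels to zero, so mirrored points land on the same UV --
    one eye texture shows up symmetrically on both sides, at the
    cost of losing depth along X (the "cylindrical" projection).
    """
    x, y, z = p
    mirrored = (-x, y, z)
    return tuple(a + b for a, b in zip(p, mirrored))
```

Two points mirrored across the YZ plane, like `(1, 2, 3)` and `(-1, 2, 3)`, both map to `(0, 4, 6)`.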

no wait, this is stupid and way more complicated than it needs to be

iu_88920_5523965.jpg

That's better

Okay what else...

Let's go with "simulating imperfection," another thing this model does. This model is for Filly Astray, which is pixel art. 3D used in 2D usually isn't very deceiving, because of how perfect it is. However, Guilty Gear is pretty well known as an exception to this rule, fooling people at least long enough for their minds to be blown in realization. I'm not that familiar with the game, but if I remember correctly their models only interpolate at 12 frames per second, even when the rest of the game runs smooth. Their models have pretty intense polycounts as far as I know, so they can not only get smooth outlines, but simulate imperfection as they outline each vertex. I tried to do a similar thing with this wolf model: subdividing it for extra vertices, then applying noise as a displacement texture, and making that texture world-space mapped. As a result, each vertex is just slightly offset toward or away from its normal, randomly, seeded by where it sits in the world, so the silhouettes and contours of the model always look slightly different as it moves, like an animator's humanity in their strokes. However, this also makes the shading appear more noisy, and when it's getting downscaled like it is, I think its effects are either unnoticeable or backwards. Oops, that's dumb.
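The world-seeded vertex wobble boils down to something like this sketch (plain Python, with a hash standing in for Blender's noise texture; the function name is mine):

```python
import hashlib
import struct


def jitter_along_normal(position, normal, strength=0.02):
    """Offset a vertex along its normal by a pseudo-random amount
    seeded by its world-space position.

    The same position always yields the same offset, so the
    silhouette is stable for a still model but shifts subtly as
    the model moves through the world -- a rough stand-in for a
    world-mapped noise displacement texture.
    """
    # Hash the world position into a stable value t in [-1, 1].
    key = struct.pack("3f", *position)
    h = int.from_bytes(hashlib.md5(key).digest()[:4], "little")
    t = (h / 0xFFFFFFFF) * 2.0 - 1.0
    # Push the vertex along its normal by at most `strength`.
    return tuple(p + n * t * strength for p, n in zip(position, normal))
```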

Oh, and compositing.

I've learned better ways to composite than what DJTHED last taught me, or at least for this purpose.

iu_88921_5523965.png

THIS NODE. REMEMBER THIS NODE. Previously, I would thicken no-AA outlines and silhouettes with the blur filter, then use the color ramp to turn all those greys into full white. This node, however, takes a much more direct and predictable path, and is also just simpler. I haven't tried it, but you might also be able to get it to work with anti-aliasing on, so woo hoo on that :D

iu_88942_5523965.gif


But wait oh wait wait, there's yet more :D

[THIS IS HIGHLY IRREGULAR]

iu_88943_5523965.jpg

So for yet another game at Warren Tech, my school, there's this guy for a 2D fighting game. We made our game in Unity's 3D space; even though we're going to use an orthographic view, it's still 3D. Currently, we're using 2D sprites made in After Effects, but wow does that make the art folders massive: over 1000 sprites. I wanna gut all that out, unfortunately for our poor animator, and instead use a 3D model made to look 2D. The advantages: we can have even smoother animations, because keyframes are stored as positional data and the interpolation is internal, and we can make the model break out of the second dimension for animations. I don't have many other comments about this guy, other than that putting him together was interesting. Each body part is a separate mesh, just a plane, skewed slightly so it doesn't clip, and manually UV'd. Thankfully Duck, the artist who drew the texture for this rig, is okay with each body part being just slightly disproportionate and offset. This'll be more interesting when it moves.



Posted by Gagangrene - December 16th, 2019


Part 2


iu_78569_5523965.png

So we left off with this. This is how we make the shadows stop and start abruptly, like cel shading. Tweak the positioning of these stops to your liking. If you connect this image output directly into the composite, it should look something like this.

iu_78570_5523965.jpg

Now, let's hook this up to the Multiply node. It doesn't matter whether it's plugged into the top or bottom input in this case, since multiply is just math, and multiplication is commutative. White is 1, black is 0, shades of grey are fractions. Feed Diffuse Color into the other image input of Multiply, and
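If the Multiply node feels abstract, this is all it does per pixel (a toy Python version, channels in the 0-1 range):

```python
def multiply_blend(a, b):
    """Multiply blend, per channel: white (1.0) leaves the other
    input unchanged, black (0.0) forces black, and greys darken
    proportionally. Order doesn't matter: a*b == b*a.
    """
    return tuple(ca * cb for ca, cb in zip(a, b))
```

So multiplying the cel-shaded grey ramp against Diffuse Color darkens the material colors exactly where the shadows are.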

iu_78572_5523965.png

Our render result should look something like this, now.

iu_78571_5523965.jpg

Now we can end just about here. Take these two diverging nodes, combine their results with another blending node, plug its output into the compositor, and be done.

iu_78573_5523965.jpg

iu_78574_5523965.jpg



Posted by Gagangrene - December 16th, 2019


Part 1


So by default, our color-ramp should've made our final composite look pure black. Like I was saying, move the white color stop to the left, and watch as outlines appear.

iu_78395_5523965.jpg

iu_78396_5523965.jpg

And it's important you don't bring it too far to the left, or you'll start getting outlines in places you might not want them. (Also don't worry: I accidentally moved the hammer and hit render again; that has nothing to do with the compositor.)

iu_78398_5523965.jpg

Now we have an outline, though it's got no anti-aliasing and looks very pixel-y. One solution is, instead of using the "Constant" interpolation mode and moving the white stop to the left, we set it back to Linear, move the white stop back to the right, and instead move the black color stop from the left to the right.

iu_78397_5523965.png

Now the effect is a lot cleaner, with smoother, somewhat-anti-aliased lines.

iu_78399_5523965.jpg

However, this does have a few issues. The Color Ramp is replacing any pixel under its value with pure black, so our "anti-aliasing" stops prematurely and the outlines are still kinda pixel-y. Just less. That's why I don't recommend this method, and with that said, I will not be using it for the rest of the blog.
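For the curious, both ramp behaviors can be sketched as a toy 1-D function (my own simplification, not Blender's actual implementation): "Constant" snaps each value to the stop on its left, which is what gives the hard pixel-y edges, while "Linear" blends between neighboring stops, which is where the semi-anti-aliasing comes from.

```python
def color_ramp(value, stops, interpolation="linear"):
    """Minimal 1-D color ramp over grey values.

    `stops` is a sorted list of (position, grey) pairs, like the
    stops on Blender's Color Ramp node. "constant" returns the
    stop on the left of `value` (hard edges); "linear" blends
    between the two surrounding stops (soft edges).
    """
    if value <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if value <= p1:
            if interpolation == "constant":
                return c0
            t = (value - p0) / (p1 - p0)
            return c0 + (c1 - c0) * t
    return stops[-1][1]
```

Dragging the white stop leftward just means more of the Laplace output clears the last stop and snaps to full white.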


So the next step is also optional, but I've grown to like it. Our outline is pretty thin, and that can't be changed in the Laplace node or the Color Ramp node to any significant degree, and it'll look even more minuscule if we render in 4K. So, we're going to blur it to make it bigger, and use a Color Ramp node again to turn all the shades of grey into full-bright white.

iu_78400_5523965.pngiu_78401_5523965.png

You can reconfigure the X and Y values to your liking in the Blur section, which will scale up the outlines. I imagine you can surmise what the color ramp does, so I will not explain it again. In the end, we should have a nice, thick outline.
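The blur-then-ramp trick is essentially morphological dilation. Here's a tiny Python sketch on a 0/1 mask (my own reduction, collapsing the box "blur" and the re-threshold into one step):

```python
def thicken_outline(mask, radius=1):
    """Thicken a binary outline: any pixel within `radius` of a
    white pixel becomes white, which is what blurring the line and
    then snapping every non-zero grey back to full white achieves.

    `mask` is a 2-D list of 0/1 pixels.
    """
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Search the neighborhood for any white pixel.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w and mask[ny][nx]:
                        out[y][x] = 1
    return out
```

A single white pixel grows into a 3x3 block with `radius=1`; that's the fatter line.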

But, perhaps you're not satisfied with the thickness of the line, still. It's all got the same weight, and the sharp ends of lines are now very circular and uninteresting.

Before: After:

iu_78522_5523965.pngiu_78523_5523965.png

But luckily, there's a solution to this issue too.

Turn on "Variable Size" in Blur. This means that different parts of the image can be blurred by different amounts.

iu_78524_5523965.png

Next, feed the output of the Laplace node into the "Size" input of the Blur node.

iu_78525_5523965.png

The outlines will appear thin again, naturally. Turn up the X and Y scaling values to your liking; I'm choosing 10 to make it clear. Look at that: variable line quality! Sharp, pointy ends!

iu_78526_5523965.jpg

iu_78527_5523965.png

Oh wait.

OOoooh noooo.

What is that!?

iu_78528_5523965.png

This "beading" is a result of anti-aliasing: when the approximation of a line has to jump from one row of pixels to the next, the line abruptly gets thicker. I don't like it. Perhaps you don't either. Let's do our best to clear it out. The solution I propose is a very simple one: a denoise filter.

iu_78530_5523965.png

iu_78531_5523965.png

The change is barely noticeable if you compare the Laplace output to the denoise output, but it's very effective. It doesn't outright remove the problem, but in the end it's pretty well swept under the rug.

iu_78532_5523965.png

Now for me, I think that's enough fooling around with outlines. Time for Shading.

Shove all these nodes out of the way, so we have room for the next set of nodes.


Let's start with this:

iu_78533_5523965.png

To get the "Multiply" node, grab the Color > Mix node, and set it to "Multiply" instead of "Mix" once it's placed. Mix nodes function like Photoshop blending modes.

iu_78534_5523965.png

Feed Diffuse Direct, the light "reflected" into the camera, into the Color Ramp node. Add color stops either by clicking between the existing stops or by hitting the "plus" icon. Change the color of the selected stop by clicking on the bar just above the "Fac" input. I'm using greys right now, but actual colors will change the look.

iu_78535_5523965.png

Oops, again at the 20 image limit. Guess this is now a 3-part tutorial!



Posted by Gagangrene - December 14th, 2019


Firstly, I'd like to attribute a lot of what I'm talking about to the teachings of this video.

https://youtu.be/tI5mtH4mVVc


I'm assuming you've already got some Blender experience, know how to change shading in the 3D viewport, have a model and scene configured, and you just want to shade it uniquely. You need at least Blender 2.81 for this.

So I can turn this:

iu_78101_5523965.jpg

Into this:

iu_78575_5523965.jpg

With the compositor:

iu_78105_5523965.jpg

And I'm going to teach you how to do it too, assuming you have a model of your own.

Before you use the compositor, there's a few properties about your project you need to change so you have all the tools to make this.

First and foremost, you need to be using the Cycles render engine, and in the compositor editor, we need the compositor to use nodes.

iu_78104_5523965.pngiu_78103_5523965.png

Cycles gives us the most options in the Compositor; we will need 4 of them, and I am using 5.

Also, note that in Render Properties, under the Film section, Pixel Filter subsection, I have the width set to 0.01. This effectively removes anti-aliasing, the blur on diagonal lines that makes them appear smoother. (This is optional, but I have it off because we're using color ramps set to constant interpolation, which won't anti-alias either, so I want the whole image to be more cohesive in its look.) "Transparent" is unchecked by default, but if checked it will turn the background transparent in our raw render, as well as give us an alpha channel to work with in our Render Layers node. Speaking of which, let's just look at that for a second:

iu_78106_5523965.png iu_78107_5523965.png

These are our render layers. "Image" is the complete render, Alpha and all the Diffuse layers put together. Alpha is set as described before, and the rest of these options are set here, in Layer Properties:

iu_78108_5523965.png

In Data, under the Passes section, we want "Normal" and "Object Index" checked. Normal gives us a normal map generated from the render, which we will use to create our outline. Object Index, shortened to IndexOB, is a unique channel that can be filtered to create a black-and-white silhouette of specific meshes, which we will use to mask shadows and thus change each shadow's color individually.

In the "Light" subsection, we also want Diffuse Direct and Diffuse Color enabled, shortened to DiffDir and DiffCol in the Compositor. Direct is a layer describing just the light a model receives, which we can use to cel shade. Color is a layer describing just the colors from materials, with no light affecting them.
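As an aside, if you'd rather flip these switches from a script, the same setup can be done through Blender's Python API. This is a sketch against the 2.8x `bpy` property names as I understand them, so double-check them in your version:

```python
import bpy

scene = bpy.context.scene
scene.render.engine = 'CYCLES'  # Cycles exposes the most passes
scene.use_nodes = True          # "Use Nodes" in the compositor

layer = bpy.context.view_layer
layer.use_pass_normal = True          # Normal -> outline source
layer.use_pass_object_index = True    # IndexOB -> per-object masks
layer.use_pass_diffuse_direct = True  # DiffDir -> cel-shading input
layer.use_pass_diffuse_color = True   # DiffCol -> flat material colors
```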


Now you might be reading through and noticing there's not much changing in the 3D viewport right now. That's perfectly intended. We're also not seeing changes anywhere else, and if you've been following this guide, that's also intended. But to clarify where all this is going, press F12, or find "Render" in the top left corner and hit "Render Image." You will need to do this every time you open the project, and it typically takes some time to complete when using the Cycles renderer. This will usually open a new window to show the render, but you can go to "Edit" > "Preferences" to reconfigure it to show the render in an area inside Blender.

iu_78109_5523965.png


Now for the fun part, where we actually use the compositor. This comes after that rendering process we just did with F12, so you won't need to re-render the whole scene as you edit the compositor graph.

iu_78110_5523965.png

The node graph should by default look something like the image above. The Render Result in the image editor will display whatever is connected to the Composite node. Knowing this, you can take a different layer output from Render Layers, feed it into the "Image" input of the Composite node, and look at your Render Result to see how the render looks. (To make a connection, click on a colored dot on the right side of a node and drag it to a dot on the left side of another node. Connections flow from right sides to left sides.)


iu_78111_5523965.pngiu_78112_5523965.jpg

IndexOB won't do anything right now, but the rest will, and I recommend you experiment and see what they do. Actually, I recommend you feed the output of any of your nodes into the Composite node whenever something unexpected or wrong happens. It's great for troubleshooting.


Now, for realsies, we're going to dissect a node graph and also explain how to make it. This part of the node graph is one way to make the outline, based on the normal pass. This method, or at least the way I made it, will not anti-alias, but stay tuned if you DO want anti-aliasing.

iu_78380_5523965.jpg

To create a node, go to "Add" in the top left corner. Alternatively, with your cursor over the compositor area, press Shift+A.

iu_78381_5523965.png

We're going to want a Filter node first. Set it to Laplace, and connect the Normal output to the Image input.

iu_78382_5523965.png

And maybe we should also take the output image from the Filter node, connect it to the Composite node's Image input, and look at what it does to the render result.

iu_78383_5523965.png

The Render Result should look something like this:

iu_78384_5523965.jpg

Maybe it looks useful to you, but you and I both know that this alone isn't going to be useful. It's Edge-detecting EVERYTHING. Every edge is traced to some degree.

iu_78385_5523965.png
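If you want intuition for why Laplace traces every edge, here's the filter reduced to a toy Python convolution (a standard 3x3 Laplacian kernel, not necessarily Blender's exact internals): flat regions cancel to black, and any abrupt change in the input, like a crease in the normal pass, lights up.

```python
def laplace_filter(img):
    """3x3 Laplacian edge detect on a 2-D list of grey values.

    Each pixel becomes |4*center - up - down - left - right|:
    zero wherever the image is flat, bright wherever it changes
    abruptly. Image borders are left at zero for simplicity.
    """
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = abs(
                4 * img[y][x]
                - img[y - 1][x] - img[y + 1][x]
                - img[y][x - 1] - img[y][x + 1]
            )
    return out
```

Run on the normal pass, every change in surface orientation registers, which is exactly why EVERYTHING gets traced and we need the ramp to cull the faint edges.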

And that's what the next node is for: gutting out all those unwanted edges. There are two methods to get rid of them.

iu_78386_5523965.png

We need the Color Ramp Node for both methods.

iu_78387_5523965.png

Where the methods diverge is how we use the node. For this first method, we're not going to do any anti-aliasing; we're just removing the greys from the Laplace output. Change the interpolation method from "Linear" to "Constant," then click and hold the little white box at the rightmost edge of the value scale, and drag it toward the left. As you drag, edges should start appearing as white.


(Uh oh, Newgrounds won't allow a 21st image on this post so time for another blog post.)


Part 2



Posted by Gagangrene - November 21st, 2019


I will elaborate on this video later.


Posted by Gagangrene - November 21st, 2019


End result

Here's what the fun box looked like in the end:

iu_71481_5523965.jpg

iu_71482_5523965.jpg

We got 3rd place.


Unfortunately, this kinda felt like an end-of-the-year, post-finals school project, made with no passion or even a feeling of consequence. Despite the 3rd place, I'm not particularly proud of it.


Day 2:

I walked into our room, and was immediately greeted with the scent and sight of acrylic paint. FINALLY: something to do. We spent three hours of the day decorating the box. I painted the toothy maw and the mosquito. There wasn't any considerable substance to this day, though I learned why we paint acrylic on canvas and not cardboard: it has a tendency to flake off smooth surfaces. Also, yellow and purple mix into a nice crimson color, which you can see a bit of on the outer edge of the maw, but I couldn't make it very bright because of the aforementioned limitations of cardboard and acrylic.

iu_71483_5523965.jpgiu_71484_5523965.jpg

Also, I didn't make it, but I do think the shark is pretty.

While this day was the most enjoyable for me, we ended it with a problem: we still had nothing to put in the box. I took it upon myself to tape some thread tightly around the insides of the spider segment at the end of the day, the rest of the group said they would bring gummy worms and grapefruits tomorrow, and the day was over.


Day 3:

Dawn of the final day

My team members did in fact bring a Grapefruit, PAM cooking oil spray, Silly String, Gummy Bears, and a stress ball. Your mind will refuse to imagine the scent of them all together. I didn't really do anything involving the mystery materials, or closing the box up once they were inside.

Instead, there was a less gross issue that wasn't being dealt with: we couldn't be with the box to present it, so we needed a little printed document near the box to make absolutely sure people would stick their hands into it. They said I should make it in Google Docs. These are graphic design students? Telling me to work in Google Docs? So I walked over to the nearest $2k computer and made a single page informing students to reach into the box with their sleeves rolled up and to wash their hands after. I printed it, and that was the end of things. There was nothing else to do, or at least that's what the group communicated. I spent another half hour to myself, typing this up. Also, Newgrounds, I love you, but please don't save blogs and then erase everything when I have an image in them. Again, we got 3rd place. Wooooo.


Uh.



Posted by Gagangrene - November 18th, 2019


The prompt for our Warren Tech fall 8-hour creative jam, an extension of the idea of a game jam, is announced: "What are you afraid of?" I'm a novice technical artist for games, dealing mostly in the intangible. The other 6 teammates are from graphic design, none of whom I'm very familiar with, and they deal mostly in the tangible. I'm getting some red flags as these teammates talk back and forth about food or something while our announcer explains the rules and, eventually, the aforementioned prompt.


More red flags as at least 3 of our teammates are clueless about what the announcer just said as we leave the room. For the first 30 minutes, we try to come up with an idea. I start by asking everyone the obvious question: "What are you afraid of?" I hear spiders three times, needles, rejection, the dark, and I myself say "lost." The next question: how do we present these ideas? My idea is a grimly lit forest where the trees are spider legs, which you would navigate through Sketchfab. However, this idea wasn't very inclusive, and it would also overburden me as the only one who could do computer 3D. Another idea was a dream catcher of horror. Not necessarily creative, possibly assimilative, but it would at least let us all put something into it. This idea was thrown out, though, because we didn't think we had the resources. Decidedly very okay. The idea we settled on, with the least time spent discussing it, was a box of ambiguous insides that you would reach into. It has the same limitation as the last suggestion, but I was sick of sitting on a carpet floor, so I just said "okay."


The next step was to make the box. We didn't actually have a cardboard box to work with, so instead we had to use some uniquely thick paper-ish material. Cutting it all apart took about 3 pairs of hands, and I wasn't the keenest of the seven of us, so I waited. This is a recurring and constant theme for the rest of the day, breaking only for a moment when I got the opportunity to cut a hole in the top of the box. Then back to the rest of the day; I just sat around and watched. The box was also spray-painted, but I didn't really participate in that either, because again it didn't demand all hands on deck. 45 minutes before "class" got out for the day, we were done with the box. The next course of action was to... sit around and do nothing. Poking around on my phone and showing a funny pufferfish video was tempting, but I suppressed the urge, again doing nothing. A mentor comes in and reminds us that we've got thousands of dollars worth of equipment and work blogs like this one, and we just head over to our home rooms to write these blogs.


Uh.


Posted by Gagangrene - November 3rd, 2019


I plan on doing this more frequently.


So for the past 3-4 weeks I've been learning how to use Shader Nodes in Blender to make more efficient materials, and assets for a Unity game.


iu_66687_5523965.jpg


This is the code for a burlap texture I made on my own.


So I started learning this on I think... The 7th, Monday the 7th of October?

Day 1, I was looking at my Game Development teacher's work.

One of the things he was showing off was a set of nodes in Unreal that distorted his wood texture in a certain pattern so that the texture stopped and started as if the wood texture was several boards. I thought: That looks cool, I should do that! So I did. Or at least, I tried.


I already had a wood texture:


iu_66689_5523965.jpg


And I had seen an image of the boards as different shades of grey in the graph, and I tried to mimic that with the red channel in this image that I'm going to call the offset (The other channels have been hidden for clarity):


iu_66688_5523965.png


I knew I needed a second texture node, a texture coordinate node, and an RGB separator node, and thought I needed a mapping node, but had not a clue where to go from there. I was getting something similar to this:


iu_66690_5523965.jpg


I didn't really know what I was looking at, although it looked like a broad approximation of what I was going for. I couldn't understand it any further, so I went to my teacher for help. The big issues were these:


The mapping node was vestigial, both when I was making this and still now in my burlap code, as I've only recently found. Instead, I needed a vector math node for the red channel to modify the UV, and I could also use a scalar math node to multiply the red channel to increase or decrease the offset to my liking.

I had my image's texture set to linearly interpolate color between pixels, combined with a minuscule image size of 16x16, which is why all the rectangles here were so round around the edges.

I was feeding my wood texture a red channel as a UV. I'm not exactly sure why the material responds the way it does, but textures seem to only want to be fed UV coordinates.


Anyways, my teacher and I fixed these problems. Firstly, a vector math node was implemented: instead of feeding the red channel directly into the texture, we'd feed it into an "Add" vector node along with the UV, and use the output of that for the texture. We also changed the offset texture's interpolation to "Nearest," which made its effects hard and integer-ish, as intended. The results aren't as obvious, but hopefully you can still see them:
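The fixed setup, reduced to plain Python (names are mine; the "Nearest" interpolation becomes an `int()` truncation): sample the offset image's red channel with no smoothing, scale it, and add it to the U coordinate before the wood lookup.

```python
def offset_uv(uv, offset_tex, scale=1.0):
    """Shift a UV lookup by a per-texel offset, like feeding an
    offset image's red channel into a vector Add with the UV.

    `offset_tex` is a 2-D list of red values, sampled nearest-
    neighbor so whole blocks of texels share one offset -- which is
    what breaks a continuous wood grain into separate boards.
    `scale` plays the role of the scalar multiply node.
    """
    u, v = uv
    h, w = len(offset_tex), len(offset_tex[0])
    # Nearest-neighbor sample of the offset texture at this UV.
    tx = int(u * w) % w
    ty = int(v * h) % h
    red = offset_tex[ty][tx]
    return (u + red * scale, v)
```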


iu_66692_5523965.jpg


Next, I wanted to make a brick texture.

This time on the 8th and 9th of October.

I started with a similar approach. First, I made my brick texture in Photoshop, like this, though it did look a little different at first and also had a supplementary normal map:


iu_66691_5523965.jpg


Next, I made a minuscule square offset image with two different tones separated equally on the top and bottom. I tried to get the offset to loop once for every two bricks, and have it offset every other brick twice as much. I was stumped when I found the UVs were unitary instead of pixel-based, meaning the UV would fit a square just as readily as it would fit this rectangle of a brick. I wanted to find a way to scale the UV on one axis with nodes so I could fix this, but I didn't yet understand how, so I went to my teacher again. Unfortunately, this was about the end of the day, so I didn't get the answer until the next day, when we designed an even-odd function so that every other unit integer would be offset by half a unit to the right.
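The even-odd function itself is tiny. In plain Python (my naming, not the node names):

```python
def brick_row_offset(v):
    """Rows whose integer part is odd shift half a brick to the
    right; even rows don't. This staggers the bricks the way the
    even-odd shader function does."""
    return 0.5 if int(v) % 2 else 0.0


def brick_uv(u, v):
    """Apply the per-row stagger before the brick texture lookup."""
    return (u + brick_row_offset(v), v)
```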


However, the greatest thing I've made so far is this aforementioned burlap texture:


iu_66693_5523965.jpg


And all with these 3 textures, no less. THESE ARE THE ACTUAL SIZE.

iu_66695_5523965.pngiu_66694_5523965.pngiu_66696_5523965.png


By default, the burlap pad's UV is just one big square that stretches to each corner of the unit. The shader scales the UV up by 64, inversely scaling the texture down to 1/64² of the size of the burlap pad. It loops, of course, so this itself isn't an issue. HOWEVER, this alone does make the material appear awfully grid-like, and that's not desirable.
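The rescale is just a multiplication plus the wrap that texture lookups do anyway. Sketched in Python (illustrative, with an explicit modulo standing in for the wrap):

```python
def tile_uv(uv, tiles=64):
    """Scale a UV up by `tiles`. Because lookups wrap at 1.0, the
    texture repeats `tiles` times per axis, shrinking it to
    1/tiles**2 of the pad's area -- hence the grid-like repetition
    that the green-channel distortion then has to hide."""
    u, v = uv
    return ((u * tiles) % 1.0, (v * tiles) % 1.0)
```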


So next, I made the weird greenish-yellowish image on the left. It's not fed through the rescaling nodes like the burlap texture is, so it's still 1:1 with the UV. Its green channel is used to distort the UVs further, as well as adjust the saturation of the texture. The red channel affects the value, which is what creates that gross stain in the middle. Combined, I think this does a good job of shrouding the mundane-ness of the actual burlap texture.


The next week and a half was spent putting these textures and shaders onto some modular assets in blender for the Unity game, nothing very interesting.

I did have to spend a bit of time figuring out how to scale the UVs so that the mortar of my brick texture started and ended nicely, so the assets looped in multiple directions, and so the materials weren't stretched at all. Scaling all the individual vertical faces by 11/12 (the 11 comes from the height of the wall, the 12 from a 6 and a 2 in the shader node scaling), and then scaling them again when it's the inside of a doorframe and 11/12 doesn't fit properly. Fun stuff.


Getting them into Unity on the week of Halloween gets interesting again.

Unity doesn't come with a shader graph by default; instead there's an add-on Shader Graph package. I get that into Unity. After reading a bit of the documentation on the shader graph, I make a shader with it. Immediately, there's a red error symbol next to the master node saying that the shader's "not compatible with the current render pipeline." I ignore it and try to make a simple shader anyway. Then I give a material the shader, and the material appears as pure, blazing magenta. Okay, so I guess I need to fix this error now; it's not because something's missing inside the shader itself.

So I start asking the programmers for our game, the most experienced with Unity, "Do you know what a render pipeline is?"

"No."

Okay, time to learn about render pipelines, here we go!

Unity's documentation starts to get disheveled at this point, linking to pages that don't currently exist and only ever speaking as if trying to sell me something. Eventually I figure out that I need the Lightweight Render Pipeline add-on to use the shader graph. I get that. Nothing happens, of course. The document I'm looking at assumes I've already done a few steps that I hadn't, and it gets me nowhere. Then the weekend comes, and my eyes are just kinda cleared with nothingness as I take a break from Unity. I come back and start digging through the documents with searches instead of links. FINALLY, some answers are in sight. Apparently I needed to make a Render Pipeline asset, then point the project's graphics settings at that asset, and only then would my custom shaders render properly. Once I got that sorted out, I could finally get bricks completely into Unity. I don't actually have pictures for any of this because I'm writing away from the work computer, sorry.


Also, I don't really know how to end this. I intend to post these blogs more frequently, on a weekly basis, though none nearly as long as this one. See you next time, I guess.



Posted by Gagangrene - October 8th, 2019


Check this out, since it's part of a set and I don't want to dedicate a single artwork to it.

iu_60495_5523965.jpg


Posted by Gagangrene - April 20th, 2019


I expected the Vile engineer to beat all of my 2019 posts on day 1. It's the worst performing one yet. I feel humbled.