
Geometry

Geometry basics

Nodes

Input elements used in Emeshe:

EmesheGeomFeatures node

Standards:

Allows you to define Features easily

  • Prepared GeomFX will know what semantics to tunnel
  • Usually create one for IN semantics and one for OUT semantics
  • Cons the defines together and feed them to the gsfx (see the sketch after the semantics below)

POSITION

NORMAL

TEXCOORD0

BLENDINDICES

BLENDWEIGHTS

TANGENT

BINORMAL
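
The feature defines and the semantics above typically end up gating the GeomFX input/output structs. A minimal sketch of what that can look like; the macro names here are hypothetical, not Emeshe's actual defines:

// Hypothetical feature defines, as produced by EmesheGeomFeatures and Cons'd together
#define HAS_NORMAL 1
#define HAS_TEXCOORD0 1

struct VSin
{
    float3 Pos  : POSITION;
#if HAS_NORMAL
    float3 Norm : NORMAL;
#endif
#if HAS_TEXCOORD0
    float2 UV   : TEXCOORD0;
#endif
};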

Emeshe

  • Skinning
  • SkinningMDMA
  • ExpandAlongMotion
  • ExtruderVel
  • Decimator
  • Splines

.xyz: Previous Position

.w: "Instance" ID

COLOR0

(not used for any kind of color)

have to be rewritten

Haven't tested the current implementation, but it should work
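
Since COLOR0.xyz carries the previous-frame position, the MRE_VELOCITY target can be fed by projecting both the current and the previous position. A minimal sketch, assuming hypothetical cbuffer and struct names (not Emeshe's actual code):

// Deriving a screen-space velocity from COLOR0.xyz (previous position).
// The cbuffer layout and all names here are assumptions.
cbuffer Transforms : register(b0)
{
    float4x4 tWVP;      // current world-view-projection
    float4x4 tWVPprev;  // previous frame's world-view-projection
};

struct VSin  { float3 Pos : POSITION; float4 Col : COLOR0; };
struct VSout { float4 Pos : SV_Position; float4 PosC : TEXCOORD0; float4 PosP : TEXCOORD1; };

VSout VS(VSin input)
{
    VSout o;
    o.PosC = mul(float4(input.Pos, 1), tWVP);          // current clip-space position
    o.PosP = mul(float4(input.Col.xyz, 1), tWVPprev);  // previous clip-space position
    o.Pos  = o.PosC;
    return o;
}

float2 PS(VSout input) : SV_Target  // written to the 2xfloat16 velocity target
{
    // NDC-space delta between the two frames
    return input.PosC.xy / input.PosC.w - input.PosP.xy / input.PosP.w;
}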

Deferred render engine:

Forward

Materials

CPU Space

GPU Space

DeferredBase

Basics

Using VObjects for management

you don't necessarily have to know about it though

MaterialBuffer node collects data from MaterialManagers and converts them into 3 Buffers:

Most processing done in Screen Space

Base module to draw the scene

Targets:

It has a PN-Triangles tessellation variant

MaterialDictionary

MF_MATERIALMETA

MaterialData

MM

MaterialManager

Additional Features

FeatureOffset

Geometry Features

MF_MATERIALDATA

Managed by #define's under the hood

4xfloat16 (only RGB used)

MF_FEATUREOFFSET

Managed by #define's under the hood

Individual bits indicate whether a feature is known or not

MaterialMeta

{
    uint Flags;
    uint Address;
    uint Size;
}

From SV_InstanceID

or from COLOR0.w

Where the material starts in the MaterialData buffer
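
A minimal sketch of how a MaterialMeta entry might be fetched; the buffer and function names are assumptions, only the struct layout follows the notes above:

struct MaterialMeta { uint Flags; uint Address; uint Size; };
StructuredBuffer<MaterialMeta> MaterialMetaBuf;

// The material ID comes either from native instancing (SV_InstanceID)
// or from the geometry itself (COLOR0.w)
uint GetMaterialAddress(uint instanceID, float4 color0, bool useColor0)
{
    uint matID = useColor0 ? (uint)color0.w : instanceID;
    return MaterialMetaBuf[matID].Address; // where the material starts in MaterialData
}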

FM

...

FeatureManager

FM

  • Each parameter is stored here
  • Parameters can have multiple floats
  • The shader developer should know how many floats they want to get
  • Uint buffer containing where individual features are in MaterialData, relative to the material offset
  • Each possible feature has its own offset; if a feature is not used, its offset is left at 0
  • TexCoords
  • Normal mapping
  • Previous position

...

Tangents

Binormals

  • Instancing
  • Triplanar Projection
  • Write to depth
  • Alphatest

Total size of material in number of floats

  • Feature and parameter names are loaded from a FeatureTable
  • Editing that table can break stuff! :P

float[X] GetFloat[X](uint MatID, uint2 Feature, uint ParamOffset)
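
A minimal sketch of how such a helper could resolve its address through the FeatureOffset buffer; the buffer names and the exact meaning of the Feature argument are assumptions (here Feature.x is taken as the feature's index and Feature.y as the number of possible features, i.e. the per-material stride):

struct MaterialMeta { uint Flags; uint Address; uint Size; };
StructuredBuffer<MaterialMeta> MaterialMetaBuf;
StructuredBuffer<uint>  FeatureOffset; // feature offsets relative to the material's Address
StructuredBuffer<float> MaterialData;  // every parameter, flattened to floats

float2 GetFloat2(uint MatID, uint2 Feature, uint ParamOffset)
{
    MaterialMeta meta = MaterialMetaBuf[MatID];
    uint featOffs = FeatureOffset[MatID * Feature.y + Feature.x];
    uint addr = meta.Address + featOffs + ParamOffset;
    return float2(MaterialData[addr], MaterialData[addr + 1]);
}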

From COLOR0.xyz

+ ID from COLOR0.w

4xfloat16 (only RGB used)


Decreases performance!

Currently known features:

  • MRE_COLOR
  • MRE_NORMALS
  • MRE_VELOCITY
  • MRE_MATERIAL

MF_LIGHTING_AMBIENT

MF_LIGHTING_PHONG

MF_LIGHTING_PHONG_SPECULARMAP

MF_LIGHTING_COOKTORRANCE

MF_LIGHTING_COOKTORRANCE_SPECULARMAP

MF_LIGHTING_COMPOSITE

MF_LIGHTING_COMPOSITE_MAP

MF_LIGHTING_FAKESSS

MF_LIGHTING_FAKESSS_MAP

MF_LIGHTING_FAKERIMLIGHT

MF_LIGHTING_FAKERIMLIGHT_MAP

MF_LIGHTING_MATCAP

MF_LIGHTING_MATCAP_BLENDMAP

MF_LIGHTING_EMISSION

MF_LIGHTING_EMISSION_MAP

MF_LIGHTING_SHADOWS

MF_GI_SSAO

MF_GI_CSSGI

MF_GI_SSSSS

MF_GI_SSSSS_MAP

MF_REFLECTION_SPHEREMAP

MF_REFLECTION_SSLR

MF_REFLECTION_MAP

MF_REFRACTION_SPHEREMAP

MF_REFRACTION_SSLR

MF_REFRACTION_MAP

MF_CALCULATENORMALS

2xfloat16

Joins the flags from features

Prop Layer

Instancing

  • Feature #define's are in MaterialFeatures.fxh
  • Helper functions for reading materials are in Materials.fxh

Usually for Shadow maps

  • Most of the parameters can be controlled from a StructuredBuffer
  • The simplest way is the InstanceParams node

Materials are referenced by their ID

  • Only output world position
  • And distance from camera

4xuint32:

RG: UV coords (float bits stored as uint)

B: Material ID

A: Object ID
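
A minimal sketch of packing/unpacking that 4xuint32 target with asuint/asfloat; the function names are placeholders:

// Pack the material GBuffer target: UV as raw float bits, then material and object ID
uint4 PackMaterialTarget(float2 uv, uint matID, uint objID)
{
    return uint4(asuint(uv.x), asuint(uv.y), matID, objID);
}

// Read it back in a screen-space pass
void UnpackMaterialTarget(uint4 g, out float2 uv, out uint matID, out uint objID)
{
    uv    = float2(asfloat(g.x), asfloat(g.y));
    matID = g.z;
    objID = g.w;
}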

2 ways:

  • Native instancing via SV_InstanceID
  • Read from geometry via COLOR0.w

Resource hog :(

MRE

Lighting

It is the central node

Current models are:

There are plans to implement

Ashikhmin-Shirley

Lookup Table method

Roughness map:

Selected by material from a Texture Array

  • Cook-Torrance (see the sketch below)
  • Phong
  • Create semantics for Emeshe effects
  • Calculates viewpos
  • Optionally calculates per-pixel flat normals if the material has that feature
  • Joins the textures describing the scene into a GBuffer spread

(the material should have the Specular Map feature of Cook-Torrance enabled)

for the anisotropic model

It will be thrown out though, so it's best to avoid it
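
The Cook-Torrance model mentioned in the list above can be sketched as a single-light specular term (Beckmann distribution, Schlick Fresnel); this is the textbook form, not Emeshe's exact shader:

float3 CookTorranceSpec(float3 N, float3 V, float3 L, float roughness, float3 F0)
{
    float3 H    = normalize(V + L);
    float NdotL = saturate(dot(N, L));
    float NdotV = saturate(dot(N, V));
    float NdotH = max(saturate(dot(N, H)), 1e-4);
    float VdotH = max(saturate(dot(V, H)), 1e-4);

    // Beckmann microfacet distribution
    float m2     = max(roughness * roughness, 1e-4);
    float NdotH2 = NdotH * NdotH;
    float D = exp((NdotH2 - 1) / (m2 * NdotH2)) / (3.14159265 * m2 * NdotH2 * NdotH2);

    // Geometric attenuation
    float G = min(1, min(2 * NdotH * NdotV / VdotH, 2 * NdotH * NdotL / VdotH));

    // Schlick Fresnel
    float3 F = F0 + (1 - F0) * pow(1 - VdotH, 5);

    return D * G * F / max(4 * NdotV * NdotL, 1e-4);
}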

GPU Space

CPU Space

Also calculates

Buffers

Shadows

  • ShadowMap nodes are not spreadable
  • This means each light needs a separate ShadowMap node
  • Shadows are PCSS with irregular sampling (see the sketch below)

4xfloat16 (only RGB used)

  • Similar concept as materials
  • Much simpler because of fixed parameters
  • Lights are added to their corresponding Dictionary
  • Every light type has its own set of nodes
  • Simple Structured Buffers
  • Every type of light has its own structure (see the sketch below)
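
A hypothetical example of what one of those per-type light structures could look like; the field names here are assumptions, not Emeshe's actual layout:

struct SpotLight
{
    float4x4 View;        // spotlights are expressed by view...
    float4x4 Projection;  // ...and projection transforms
    float4   Color;
    float    Intensity;
    uint     TexID;       // projected texture, referenced from a texture array
    uint     ShadowMapID;
    float    Pad;
};

StructuredBuffer<SpotLight> SpotLights;

// In the lighting pass the lights are simply iterated:
//   for (uint i = 0; i < SpotLightCount; i++) result += ShadeSpot(SpotLights[i], ...);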

noisy to dampen banding

Percentage Closer Soft Shadows

Importance shadow management is patchable
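
A minimal PCSS-style sketch with irregular (randomly rotated Poisson) sampling, where the noise is what dampens the banding; the texture and sampler names are placeholders and the kernel is much smaller than a production one:

Texture2D ShadowMap;
SamplerState sLinear;

static const float2 poisson[8] =
{
    float2(-0.613, 0.617), float2( 0.170, 0.921), float2( 0.955, 0.189),
    float2( 0.527,-0.780), float2(-0.266,-0.418), float2(-0.897,-0.197),
    float2( 0.098, 0.153), float2( 0.482, 0.321)
};

float Rand(float2 uv) { return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453); }

float PCSS(float2 uv, float receiverDepth, float lightSize)
{
    // 1) blocker search: average depth of occluders around the receiver
    float blocker = 0, count = 0;
    [unroll] for (int i = 0; i < 8; i++)
    {
        float d = ShadowMap.SampleLevel(sLinear, uv + poisson[i] * lightSize, 0).r;
        if (d < receiverDepth) { blocker += d; count++; }
    }
    if (count == 0) return 1; // fully lit
    blocker /= count;

    // 2) penumbra width estimated from the blocker/receiver distance
    float penumbra = (receiverDepth - blocker) / blocker * lightSize;

    // 3) PCF with a randomly rotated kernel (the noise hides the banding)
    float a = Rand(uv) * 6.2831853;
    float2x2 rot = float2x2(cos(a), -sin(a), sin(a), cos(a));
    float lit = 0;
    [unroll] for (int j = 0; j < 8; j++)
    {
        float d = ShadowMap.SampleLevel(sLinear, uv + mul(poisson[j], rot) * penumbra, 0).r;
        lit += (d < receiverDepth) ? 0 : 1;
    }
    return lit / 8;
}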

MRE_VIEWPOS

MRE_DEPTH

MRE_STENCIL

D24_UNorm_S8_Uint

Components

Textures per component:

Spot

Sun

Point

so you can composite them separately

  • Surprise: only direction needed
  • Shadows not supported yet :(
  • Expressed by view and projection transforms
  • Spotlights are actually projected textures
  • Has the regular parameters
  • Can have a different center position for shadowmaps

referenced from a texture array by Texture ID

  • The shadowmap should be camera view and projection dependent
  • It should also compress objects far away from the viewer
  • Didn't find out how to do it yet :(

if multiple lights use the same shadowmap

Shadowmaps are generated from a cube texture renderer

subject to change!

Expensive though, because the scene has to be rendered 6 times per light

  • Ambient
  • Diffuse
  • Specular
  • SSS
  • Rim

Compositing

Color Bleeding

Emission

Ambient Occlusion

Reflection/Refraction

based on CSSGI by ArKano22 on gamedev.net

Low frequency:

implementation by UNC

  • Simply add a color
  • Can have a texture (Emission Map feature)
  • that's it :P
  • Environment map based
  • Map selected by the material from a Texture Array
  • Stochastic
  • Temporal Reprojected
  • Depth-blurred
  • One light bounce
  • Not 100% accurate but plausible
  • May flicker on camera movement
  • Stochastic method
  • Temporal Reprojected
  • Depth-blurred


Used together they make a nice crispy look

Light Compositing

High frequency:

SSLAO.tfx

ColorBleedingInDiffuse

  • Classic ancient method
  • May have banding

makes it easy to have it in the diffuse light component

  • Control each light component amount
  • Can be controlled from material
  • Can be also controlled by texture

makes it easy to have it in the diffuse light component

Depth of Field

Plans

Motion Blur

Tonemapping

based on LensBlur TextureFX in DX9

Methods:

Auto Exposure

Can be regional (HDR image effect)

  • Recommended to enhance velocity with VelMapExtend
  • Avoids hard edge artifacts
  • Sample count can vary based on velocity but may introduce artifacts
  • Screen-Space reflections
  • Texture based Bokeh
  • MatCap
  • Performance update after NODE15

0.8 blur is the default and recommended

  • Stochastic lens blur
  • Focusing and blurring are kinda realistic
  • Simple edge-based Area of Confusion (might be replaced with a DF method)
  • Autofocus on a point of the screen (-1..1)
  • Only works after Tonemapping
  • Linear
  • Logarithmic
  • Exponential
  • Drago Logarithmic
  • Reinhard
  • FilmicALU
  • FilmicU2

This is the best!
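
Of the listed operators Reinhard is the simplest to sketch; a minimal version with an exposure control (not Emeshe's exact implementation):

float3 ReinhardTonemap(float3 hdrColor, float exposure)
{
    float3 c = hdrColor * exposure;  // auto exposure would drive this value
    return c / (1 + c);              // maps [0, inf) into [0, 1)
}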

RGB: amount per channel

A: roughness

RGB: amount per channel

A: roughness