<meta charset="utf-8">
<!-- Markdeep: https://casual-effects.com/markdeep/ -->
**Ray Tracing: The Next Week**
Peter Shirley
Version 2.0.0, 2019-Oct-07
<br>Copyright 2018-2019. Peter Shirley. All rights reserved.
Overview
====================================================================================================
In Ray Tracing in One Weekend, you built a simple brute force path tracer. In this installment we’ll
add textures, volumes (like fog), rectangles, instances, lights, and support for lots of objects
using a BVH. When done, you’ll have a “real” ray tracer.
A heuristic in ray tracing that many people--including me--believe is that most optimizations
complicate the code without delivering much speedup. What I will do in this mini-book is go with the
simplest approach in each design decision I make. Check https://in1weekend.blogspot.com/ for
readings and references to a more sophisticated approach. However, I strongly encourage you to do no
premature optimization; if it doesn’t show up high in the execution time profile, it doesn’t need
optimization until all the features are supported!
The two hardest parts of this book are the BVH and the Perlin textures. This is why the title
suggests you take a week rather than a weekend for this endeavor. But you can save those for last if
you want a weekend project. Order is not very important for the concepts presented in this book, and
without BVH and Perlin texture you will still get a Cornell Box!
Acknowledgments
---------------
Thanks to Becker for his many helpful comments on the draft and to Matthew Heimlich for spotting a
critical motion blur error. Thanks to Andrew Kensler, Thiago Ize, and Ingo Wald for advice on
ray-AABB tests. Thanks to David Hart and Grue Debry for help with a bunch of the details. Thanks to
Jean Buckley for editing. Thanks to Dan Drummond for code fixes. Thanks to Steve Hollasch and Trevor
David Black for getting the book translated to Markdeep and moved to the web.
Motion Blur
====================================================================================================
When you decided to ray trace, you decided visual quality was worth more run-time. In your fuzzy
reflection and defocus blur you needed multiple samples per pixel. Once you have taken a step down
that road, the good news is that almost all effects can be brute-forced. Motion blur is certainly
one of those. In a real camera, the shutter opens and stays open for a time interval, and the camera
and objects may move during that time. It’s really an average of what the camera sees over that
interval that we want. We can get a random estimate by sending each ray at some random time when the
shutter is open. As long as the objects are where they should be at that time, we can get the right
average answer with a ray that is at exactly a single time. This is fundamentally why random ray
tracing tends to be simple.
The basic idea is to generate rays at random times while the shutter is open and intersect the model
at that one time. The way it is usually done is to have the camera move and the objects move, but
have each ray exist at exactly one time. This way the “engine” of the ray tracer can just make sure
the objects are where they need to be for the ray, and the intersection guts don’t change much.
<div class='together'>
For this we will first need to have a ray store the time it exists at:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class ray
{
    public:
        ray() {}
        ray(const vec3& a, const vec3& b, float ti = 0.0) { A = a; B = b; _time = ti; }
        vec3 origin() const    { return A; }
        vec3 direction() const { return B; }
        float time() const     { return _time; }
        vec3 point_at_parameter(float t) const { return A + t*B; }

        vec3 A;
        vec3 B;
        float _time;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
Now we need to modify the camera to generate rays at a random time between `time1` and `time2`.
Should the camera keep track of `time1` and `time2` or should that be up to the user of camera when
a ray is created? When in doubt, I like to make constructors complicated if it makes calls simple,
so I will make the camera keep track, but that’s a personal preference. Not many changes are needed
to camera because for now it is not allowed to move; it just sends out rays over a time period.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class camera {
    public:
        camera(vec3 lookfrom, vec3 lookat, vec3 vup,
               float vfov /* vfov is top to bottom in degrees */,
               float aspect, float aperture, float focus_dist,
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
               float t0, float t1) {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            time0 = t0;
            time1 = t1;
            lens_radius = aperture / 2;
            float theta = vfov*M_PI/180;
            float half_height = tan(theta/2);
            float half_width = aspect * half_height;
            origin = lookfrom;
            w = unit_vector(lookfrom - lookat);
            u = unit_vector(cross(vup, w));
            v = cross(w, u);
            lower_left_corner = origin
                              - half_width*focus_dist*u
                              - half_height*focus_dist*v
                              - focus_dist*w;
            horizontal = 2*half_width*focus_dist*u;
            vertical = 2*half_height*focus_dist*v;
        }

        ray get_ray(float s, float t) {
            vec3 rd = lens_radius*random_in_unit_disk();
            vec3 offset = u * rd.x() + v * rd.y();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            float time = time0 + random_double()*(time1-time0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            return ray(
                origin + offset,
                lower_left_corner + s*horizontal + t*vertical - origin - offset,
                time);
        }

        vec3 origin;
        vec3 lower_left_corner;
        vec3 horizontal;
        vec3 vertical;
        vec3 u, v, w;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
        float time0, time1;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
        float lens_radius;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We also need a moving object. I’ll create a sphere class that has its center move linearly from
`center0` at `time0` to `center1` at `time1`. Outside that time interval it continues on, so those
times need not match up with the camera shutter open and close times.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class moving_sphere: public hittable {
    public:
        moving_sphere() {}
        moving_sphere(vec3 cen0, vec3 cen1, float t0, float t1, float r, material *m)
            : center0(cen0), center1(cen1), time0(t0), time1(t1), radius(r), mat_ptr(m)
        {};

        virtual bool hit(const ray& r, float tmin, float tmax, hit_record& rec) const;
        virtual bool bounding_box(float t0, float t1, aabb& box) const;

        vec3 center(float time) const;

        vec3 center0, center1;
        float time0, time1;
        float radius;
        material *mat_ptr;
};

vec3 moving_sphere::center(float time) const {
    return center0 + ((time - time0) / (time1 - time0))*(center1 - center0);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<div class='together'>
An alternative to making a new moving sphere class is to just make them all move and have the
stationary ones have the same begin and end point. I’m on the fence about that trade-off between
fewer classes and more efficient stationary spheres, so let your design taste guide you. The
intersection code barely needs a change: `center` just needs to become a function `center(time)`:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool moving_sphere::hit(
    const ray& r, float t_min, float t_max, hit_record& rec) const {
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
    vec3 oc = r.origin() - center(r.time());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
    float a = dot(r.direction(), r.direction());
    float b = dot(oc, r.direction());
    float c = dot(oc, oc) - radius*radius;
    float discriminant = b*b - a*c;
    if (discriminant > 0) {
        float temp = (-b - sqrt(discriminant))/a;
        if (temp < t_max && temp > t_min) {
            rec.t = temp;
            rec.p = r.point_at_parameter(rec.t);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            rec.normal = (rec.p - center(r.time())) / radius;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            rec.mat_ptr = mat_ptr;
            return true;
        }
        temp = (-b + sqrt(discriminant))/a;
        if (temp < t_max && temp > t_min) {
            rec.t = temp;
            rec.p = r.point_at_parameter(rec.t);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            rec.normal = (rec.p - center(r.time())) / radius;
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            rec.mat_ptr = mat_ptr;
            return true;
        }
    }
    return false;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
Be sure that in the materials you have the scattered rays be at the time of the incident ray.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class lambertian : public material {
    public:
        lambertian(const vec3& a) : albedo(a) {}
        virtual bool scatter(const ray& r_in, const hit_record& rec,
                             vec3& attenuation, ray& scattered) const {
            vec3 target = rec.p + rec.normal + random_in_unit_sphere();
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++ highlight
            scattered = ray(rec.p, target-rec.p, r_in.time());
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
            attenuation = albedo;
            return true;
        }

        vec3 albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
We can take the diffuse spheres from the scene at the end of the last book and make them move
from their centers at `time==0` to `center + vec3(0, 0.5*random_double(), 0)` at `time==1`, with
the camera shutter open over that whole frame:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
hittable *random_scene() {
    int n = 50000;
    hittable **list = new hittable*[n+1];
    list[0] = new sphere(vec3(0,-1000,0), 1000, new lambertian(vec3(0.5, 0.5, 0.5)));
    int i = 1;
    for (int a = -10; a < 10; a++) {
        for (int b = -10; b < 10; b++) {
            float choose_mat = random_double();
            vec3 center(a+0.9*random_double(),0.2,b+0.9*random_double());
            if ((center-vec3(4,0.2,0)).length() > 0.9) {
                if (choose_mat < 0.8) {  // diffuse
                    list[i++] = new moving_sphere(
                        center,
                        center+vec3(0, 0.5*random_double(), 0),
                        0.0, 1.0, 0.2,
                        new lambertian(
                            vec3(random_double()*random_double(),
                                 random_double()*random_double(),
                                 random_double()*random_double())
                        )
                    );
                }
                else if (choose_mat < 0.95) {  // metal
                    list[i++] = new sphere(
                        center, 0.2,
                        new metal(
                            vec3(0.5*(1 + random_double()),
                                 0.5*(1 + random_double()),
                                 0.5*(1 + random_double())),
                            0.5*random_double()
                        )
                    );
                }
                else {  // glass
                    list[i++] = new sphere(center, 0.2, new dielectric(1.5));
                }
            }
        }
    }

    list[i++] = new sphere(vec3(0, 1, 0), 1.0, new dielectric(1.5));
    list[i++] = new sphere(vec3(-4, 1, 0), 1.0, new lambertian(vec3(0.4, 0.2, 0.1)));
    list[i++] = new sphere(vec3(4, 1, 0), 1.0, new metal(vec3(0.7, 0.6, 0.5), 0.0));

    return new hittable_list(list,i);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
And with these viewing parameters gives:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
vec3 lookfrom(13,2,3);
vec3 lookat(0,0,0);
float dist_to_focus = 10.0;
float aperture = 0.0;

camera cam(
    lookfrom, lookat, vec3(0,1,0), 20, float(nx)/float(ny), aperture,
    dist_to_focus, 0.0, 1.0
);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
![Image 2-1](../images/img-2-2-01.jpg)
</div>
Bounding Volume Hierarchies
====================================================================================================
This part is by far the most difficult and involved part of the ray tracer we are working on. I am
sticking it in this chapter so the code can run faster, and because it refactors `hittable` a
little, and when I add rectangles and boxes we won't have to go back and refactor them.
The ray-object intersection is the main time-bottleneck in a ray tracer, and the time is linear with
the number of objects. But it’s a repeated search on the same model, so we ought to be able to make
it a logarithmic search in the spirit of binary search. Because we are sending millions to billions
of rays on the same model, we can do an analog of sorting the model and then each ray intersection
can be a sublinear search. The two most common families of sorting are to 1) divide the space, and
2) divide the objects. The latter is usually much easier to code up and just as fast to run for most
models.
<div class='together'>
The key idea of a bounding volume over a set of primitives is to find a volume that fully encloses
(bounds) all the objects. For example, suppose you computed a bounding sphere of 10 objects. Any ray
that misses the bounding sphere definitely misses all ten objects. If the ray hits the bounding
sphere, then it might hit one of the ten objects. So the bounding code is always of the form:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (ray hits bounding object)
    return whether ray hits bounded objects
else
    return false
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
A key thing is we are dividing objects into subsets. We are not dividing the screen or the volume.
Any object is in just one bounding volume, but bounding volumes can overlap.
<div class='together'>
To make things sub-linear we need to make the bounding volumes hierarchical. For example, if we
divided a set of objects into two groups, red and blue, and used rectangular bounding volumes, we’d
have:
![Figure 3-1](../images/fig-2-3-01.jpg)
</div>
<div class='together'>
Note that the blue and red bounding volumes are contained in the purple one, but they might
overlap, and they are not ordered -- they are just both inside. So the tree shown on the right has
no concept of ordering in the left and right children; they are simply inside. The code would be:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
if (hits purple)
    hit0 = hits blue enclosed objects
    hit1 = hits red enclosed objects
    if (hit0 or hit1)
        return true and info of closer hit
return false
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
To get that all to work we need a way to make good divisions, rather than bad ones, and a way to
intersect a ray with a bounding volume. A ray bounding volume intersection needs to be fast, and
bounding volumes need to be pretty compact. In practice for most models, axis-aligned boxes work
better than the alternatives, but this design choice is always something to keep in mind if you
encounter unusual types of models.
From now on we will call axis-aligned bounding rectangular parallelepipeds (really, that is what
they would need to be called to be precise) axis-aligned bounding boxes, or AABBs. Any method you want to use to
intersect a ray with an AABB is fine. And all we need to know is whether or not we hit it; we don’t
need hit points or normals or any of that stuff that we need for an object we want to display.
<div class='together'>
Most people use the “slab” method. This is based on the observation that an n-dimensional AABB is
just the intersection of n axis-aligned intervals, often called “slabs”. An interval is just
the points between two endpoints, _e.g._, $x$ such that $3 < x < 5$, or more succinctly $x$ in
$(3,5)$. In 2D, two intervals overlapping makes a 2D AABB (a rectangle):
![Figure 3-2](../images/fig-2-3-02.jpg)
</div>
<div class='together'>
For a ray to hit one interval we first need to figure out whether the ray hits the boundaries. For
example, again in 2D, this is the ray parameters $t_0$ and $t_1$. (If the ray is parallel to the
plane those will be undefined.)
![Figure 3-3](../images/fig-2-3-03.jpg)
</div>
<div class='together'>
In 3D, those boundaries are planes. The equations for the planes are $x = x_0$, and $x = x_1$. Where
does the ray hit that plane? Recall that the ray can be thought of as just a function that given a
$t$ returns a location $p(t)$:
$$ p(t) = A + t \cdot B $$
</div>
<div class='together'>
That equation applies to all three of the x/y/z coordinates. For example $x(t) = A_x + t \cdot B_x$.
This ray hits the plane $x = x_0$ at the $t$ that satisfies this equation:
$$ x_0 = A_x + t_0 \cdot B_x $$
</div>
<div class='together'>
Thus $t$ at that hitpoint is:
$$ t_0 = \frac{x_0 - A_x}{B_x} $$
</div>
<div class='together'>
We get the similar expression for $x_1$:
$$ t_1 = \frac{x_1 - A_x}{B_x} $$
</div>
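<div class='together'>
As a quick numeric check (with values chosen here purely for illustration): a ray with $A_x = 1$
and $B_x = 2$ crosses the slab $x \in (3, 5)$ at

    $$ t_0 = \frac{3 - 1}{2} = 1 \qquad t_1 = \frac{5 - 1}{2} = 2 $$

so the ray is inside that slab for $t \in (1, 2)$.
</div>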
<div class='together'>
The key observation to turn that 1D math into a hit test is that for a hit, the $t$-intervals need
to overlap. For example, in 2D the green and blue overlapping only happens if there is a hit:
![Figure 3-4](../images/fig-2-3-04.jpg)
</div>
<div class='together'>
What “do the t intervals in the slabs overlap?” would look like in code is something like:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compute (tx0, tx1)
compute (ty0, ty1)
return overlap?( (tx0, tx1), (ty0, ty1))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
That is awesomely simple, and the fact that the 3D version also works is why people love the
slab method:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
compute (tx0, tx1)
compute (ty0, ty1)
compute (tz0, tz1)
return overlap?( (tx0, tx1), (ty0, ty1), (tz0, tz1))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
There are some caveats that make this less pretty than it first appears. First, suppose the ray is
travelling in the negative $x$ direction. The interval $(t_{x0}, t_{x1})$ as computed above might be
reversed, _e.g._ something like $(7, 3)$. Second, the divide in there could give us infinities. And
if the ray origin is on one of the slab boundaries, we can get a `NaN`. There are many ways these
issues are dealt with in various ray tracers’ AABB. (There are also vectorization issues like SIMD
which we will not discuss here. Ingo Wald’s papers are a great place to start if you want to go the
extra mile in vectorization for speed.) For our purposes, this is unlikely to be a major bottleneck
as long as we make it reasonably fast, so let’s go for simplest, which is often fastest anyway!
First let’s look at computing the intervals:
$$ t_{x0} = \frac{x_0 - A_x}{B_x} $$
$$ t_{x1} = \frac{x_1 - A_x}{B_x} $$
</div>
<div class='together'>
One troublesome thing is that perfectly valid rays will have $B_x = 0$, causing division by zero.
Some of those rays are inside the slab, and some are not. Also, the zero will have a ± sign under
IEEE floating point. The good news for $B_x = 0$ is that $t_{x0}$ and $t_{x1}$ will both be +∞ or
both be -∞ if not between $x_0$ and $x_1$. So, using min and max should get us the right answers:
$$ t_{x0} = min(\frac{x_0 - A_x}{B_x}, \frac{x_1 - A_x}{B_x}) $$
$$ t_{x1} = max(\frac{x_0 - A_x}{B_x}, \frac{x_1 - A_x}{B_x}) $$
</div>
The one remaining troublesome case is when $B_x = 0$ and either $x_0 - A_x = 0$ or
$x_1 - A_x = 0$, so we get a `NaN`. In that case we can probably accept either a hit or a no-hit
answer, but we’ll revisit that later.
<div class='together'>
Now, let’s look at that overlap function. Suppose we can assume the intervals are not reversed (so
the first value is less than the second value in the interval) and we want to return true in that
case. The boolean overlap that also computes the overlap interval $(f, F)$ of intervals $(d, D)$ and
$(e, E)$ would be:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
bool overlap(d, D, e, E, f, F)
    f = max(d, e)
    F = min(D, E)
    return (f < F)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
If there are any `NaN`s running around there, the compare will return false, so we need to be sure
our bounding boxes have a little padding if we care about grazing cases (and we probably should
because in a ray tracer all cases come up eventually). With all three dimensions in a loop and
passing in the interval $t_{min}$, $t_{max}$ we get:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
inline float ffmin(float a, float b) { return a < b ? a : b; }
inline float ffmax(float a, float b) { return a > b ? a : b; }

class aabb {
    public:
        aabb() {}
        aabb(const vec3& a, const vec3& b) { _min = a; _max = b; }

        vec3 min() const { return _min; }
        vec3 max() const { return _max; }

        bool hit(const ray& r, float tmin, float tmax) const {
            for (int a = 0; a < 3; a++) {
                float t0 = ffmin((_min[a] - r.origin()[a]) / r.direction()[a],
                                 (_max[a] - r.origin()[a]) / r.direction()[a]);
                float t1 = ffmax((_min[a] - r.origin()[a]) / r.direction()[a],
                                 (_max[a] - r.origin()[a]) / r.direction()[a]);
                tmin = ffmax(t0, tmin);
                tmax = ffmin(t1, tmax);
                if (tmax <= tmin)
                    return false;
            }
            return true;
        }

        vec3 _min;
        vec3 _max;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
Note that the built-in `fmax()` is replaced by `ffmax()` which is quite a bit faster because it
doesn’t worry about `NaN`s and other exceptions. In reviewing this intersection method, Andrew
Kensler at Pixar tried some experiments and has proposed this version of the code which works
extremely well on many compilers, and I have adopted it as my go-to method:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
inline bool aabb::hit(const ray& r, float tmin, float tmax) const {
    for (int a = 0; a < 3; a++) {
        float invD = 1.0f / r.direction()[a];
        float t0 = (min()[a] - r.origin()[a]) * invD;
        float t1 = (max()[a] - r.origin()[a]) * invD;
        if (invD < 0.0f)
            std::swap(t0, t1);
        tmin = t0 > tmin ? t0 : tmin;
        tmax = t1 < tmax ? t1 : tmax;
        if (tmax <= tmin)
            return false;
    }
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
We now need to add a function to compute bounding boxes to all of the hittables. Then we will make a
hierarchy of boxes over all the primitives and the individual primitives, like spheres, will live at
the leaves. That function returns a bool because not all primitives have bounding boxes (_e.g._,
infinite planes). In addition, objects move so it takes `time1` and `time2` for the interval of the
frame and the bounding box will bound the object moving through that interval.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class hittable {
    public:
        virtual bool hit(
            const ray& r, float t_min, float t_max, hit_record& rec) const = 0;
        virtual bool bounding_box(float t0, float t1, aabb& box) const = 0;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
For a sphere, that `bounding_box` function is easy:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool sphere::bounding_box(float t0, float t1, aabb& box) const {
    box = aabb(center - vec3(radius, radius, radius),
               center + vec3(radius, radius, radius));
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
For `moving_sphere`, we can take the box of the sphere at $t_0$, and the box of the sphere at $t_1$,
and compute the box of those two boxes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool moving_sphere::bounding_box(float t0, float t1, aabb& box) const {
    aabb box0(center(t0) - vec3(radius, radius, radius),
              center(t0) + vec3(radius, radius, radius));
    aabb box1(center(t1) - vec3(radius, radius, radius),
              center(t1) + vec3(radius, radius, radius));
    box = surrounding_box(box0, box1);
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
For lists you can store the bounding box at construction, or compute it on the fly. I like
computing it on the fly because it is usually only called at BVH construction.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool hittable_list::bounding_box(float t0, float t1, aabb& box) const {
    if (list_size < 1) return false;
    aabb temp_box;
    bool first_true = list[0]->bounding_box(t0, t1, temp_box);
    if (!first_true)
        return false;
    else
        box = temp_box;
    for (int i = 1; i < list_size; i++) {
        if (list[i]->bounding_box(t0, t1, temp_box)) {
            box = surrounding_box(box, temp_box);
        }
        else
            return false;
    }
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
This requires the `surrounding_box` function for `aabb` which computes the bounding box of two
boxes:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
aabb surrounding_box(aabb box0, aabb box1) {
    vec3 small( ffmin(box0.min().x(), box1.min().x()),
                ffmin(box0.min().y(), box1.min().y()),
                ffmin(box0.min().z(), box1.min().z()));
    vec3 big  ( ffmax(box0.max().x(), box1.max().x()),
                ffmax(box0.max().y(), box1.max().y()),
                ffmax(box0.max().z(), box1.max().z()));
    return aabb(small, big);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
A BVH is also going to be a `hittable` -- just like lists of `hittable`s. It’s really a container,
but it can respond to the query “does this ray hit you?”. One design question is whether we have two
classes, one for the tree, and one for the nodes in the tree; or do we have just one class and have
the root just be a node we point to. I am a fan of the one class design when feasible. Here is such
a class:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class bvh_node : public hittable {
    public:
        bvh_node() {}
        bvh_node(hittable **l, int n, float time0, float time1);

        virtual bool hit(const ray& r, float tmin, float tmax, hit_record& rec) const;
        virtual bool bounding_box(float t0, float t1, aabb& box) const;

        hittable *left;
        hittable *right;
        aabb box;
};

bool bvh_node::bounding_box(float t0, float t1, aabb& b) const {
    b = box;
    return true;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
Note that the children pointers are to generic hittables. They can be other `bvh_nodes`, or
`spheres`, or any other `hittable`.
<div class='together'>
The `hit` function is pretty straightforward: check whether the box for the node is hit, and if so,
check the children and sort out any details:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bool bvh_node::hit(const ray& r, float t_min, float t_max, hit_record& rec) const {
    if (box.hit(r, t_min, t_max)) {
        hit_record left_rec, right_rec;
        bool hit_left = left->hit(r, t_min, t_max, left_rec);
        bool hit_right = right->hit(r, t_min, t_max, right_rec);
        if (hit_left && hit_right) {
            if (left_rec.t < right_rec.t)
                rec = left_rec;
            else
                rec = right_rec;
            return true;
        }
        else if (hit_left) {
            rec = left_rec;
            return true;
        }
        else if (hit_right) {
            rec = right_rec;
            return true;
        }
        else
            return false;
    }
    else return false;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
The most complicated part of any efficiency structure, including the BVH, is building it. We do this
in the constructor. A cool thing about BVHs is that as long as the list of objects in a `bvh_node`
gets divided into two sub-lists, the hit function will work. It will work best if the division is
done well, so that the two children have smaller bounding boxes than their parent’s bounding box,
but that is for speed not correctness. I’ll choose the middle ground, and at each node split the
list along one axis. I’ll go for simplicity:
1. randomly choose an axis
2. sort the primitives using library qsort
3. put half in each subtree
</div>
I used the old-school C `qsort` rather than the C++ sort because I need a different compare operator
depending on axis, and `qsort` takes a compare function rather than using the less-than operator. I
pass in a pointer to pointer -- this is just C for “array of pointers” because a pointer in C can
also just be a pointer to the first element of an array.
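As a standalone illustration of that pointer-to-pointer pattern (a toy `item` type, nothing from the ray tracer): `qsort` hands the comparator pointers *to the array elements*, and since each element is itself a pointer, the comparator receives a pointer to a pointer.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cstdlib>  // qsort

// Toy payload standing in for a hittable -- illustration only.
struct item { float key; };

// qsort passes pointers to the array elements; the elements are item*,
// so each const void* is really an item**: cast, then dereference once.
int item_compare(const void* a, const void* b) {
    const item* ia = *(item**)a;
    const item* ib = *(item**)b;
    if (ia->key < ib->key) return -1;
    if (ia->key > ib->key) return 1;
    return 0;
}

// Sort an array of n item pointers in place by key.
void sort_items(item** l, int n) {
    qsort(l, n, sizeof(item*), item_compare);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The `bvh_node` constructor does exactly this with `hittable**` and one comparator per axis.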
<div class='together'>
When the list coming in is two elements, I put one in each subtree and end the recursion. The
traverse algorithm should be smooth and not have to check for null pointers, so if I just have one
element I duplicate it in each subtree. Checking explicitly for three elements and just following
one recursion would probably help a little, but I figure the whole method will get optimized later.
This yields:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
bvh_node::bvh_node(hittable **l, int n, float time0, float time1) {
int axis = int(3*random_double());
if (axis == 0)
qsort(l, n, sizeof(hittable *), box_x_compare);
else if (axis == 1)
qsort(l, n, sizeof(hittable *), box_y_compare);
else
qsort(l, n, sizeof(hittable *), box_z_compare);
if (n == 1) {
left = right = l[0];
}
else if (n == 2) {
left = l[0];
right = l[1];
}
else {
left = new bvh_node(l, n/2, time0, time1);
right = new bvh_node(l + n/2, n - n/2, time0, time1);
}
aabb box_left, box_right;
if (!left->bounding_box(time0, time1, box_left) ||
!right->bounding_box(time0, time1, box_right)) {
std::cerr << "no bounding box in bvh_node constructor\n";
}
box = surrounding_box(box_left, box_right);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
The check for whether there is a bounding box at all is in case you sent in something like an
infinite plane that doesn’t have a bounding box. We don’t have any of those primitives, so it
shouldn’t happen until you add such a thing.
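As a sketch of what such a primitive would look like -- a hypothetical `infinite_plane`, shown here with stripped-down stand-in types rather than the book's real `hittable`/`aabb` headers -- it simply reports that it has no box:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
// Stripped-down stand-ins, just enough to show the convention.
struct aabb { };

struct bounded {
    virtual bool bounding_box(float t0, float t1, aabb& box) const = 0;
    virtual ~bounded() {}
};

// An infinite plane has no finite bounds, so it returns false and the
// bvh_node constructor's check prints its warning instead of crashing.
struct infinite_plane : public bounded {
    virtual bool bounding_box(float, float, aabb&) const {
        return false;  // nothing meaningful to put in the box
    }
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~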
The compare function has to take void pointers which you cast. This is old-school C and reminded me
why C++ was invented. I had to really mess with this to get all the pointer junk right. If you like
this part, you have a future as a systems person!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
int box_x_compare (const void * a, const void * b) {
aabb box_left, box_right;
hittable *ah = *(hittable**)a;
hittable *bh = *(hittable**)b;
if (!ah->bounding_box(0,0, box_left) || !bh->bounding_box(0,0, box_right))
std::cerr << "no bounding box in bvh_node constructor\n";
if (box_left.min().x() - box_right.min().x() < 0.0)
return -1;
else
return 1;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
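Only `box_x_compare` is shown above; the y and z versions differ only in which accessor they read. The sketch below is self-contained, using minimal stand-in `vec3`/`aabb`/`hittable` stubs (not the book's real classes) so the comparators compile in isolation:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cstdlib>
#include <iostream>

// Minimal stand-ins for the book's vec3/aabb/hittable -- stubs for
// illustration only.
struct vec3 {
    float e[3];
    float x() const { return e[0]; }
    float y() const { return e[1]; }
    float z() const { return e[2]; }
};

struct aabb {
    vec3 lo;
    vec3 min() const { return lo; }
};

struct hittable {
    aabb box;
    bool bounding_box(float, float, aabb& b) const { b = box; return true; }
};

// Same shape as box_x_compare, reading y() instead of x().
int box_y_compare(const void* a, const void* b) {
    aabb box_left, box_right;
    hittable* ah = *(hittable**)a;
    hittable* bh = *(hittable**)b;
    if (!ah->bounding_box(0,0, box_left) || !bh->bounding_box(0,0, box_right))
        std::cerr << "no bounding box in bvh_node constructor\n";
    if (box_left.min().y() - box_right.min().y() < 0.0)
        return -1;
    else
        return 1;
}

// And likewise for z().
int box_z_compare(const void* a, const void* b) {
    aabb box_left, box_right;
    hittable* ah = *(hittable**)a;
    hittable* bh = *(hittable**)b;
    if (!ah->bounding_box(0,0, box_left) || !bh->bounding_box(0,0, box_right))
        std::cerr << "no bounding box in bvh_node constructor\n";
    if (box_left.min().z() - box_right.min().z() < 0.0)
        return -1;
    else
        return 1;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~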
Solid Textures
====================================================================================================
A texture in graphics usually means a function that makes the colors on a surface procedural. This
procedure can be synthesis code, or it could be an image lookup, or a combination of both. We will
first make all colors a texture. Most programs keep constant RGB colors and textures in separate
classes, so feel free to do something different, but I am a big believer in this architecture
because being able to make any color a texture is great.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class texture {
public:
virtual vec3 value(float u, float v, const vec3& p) const = 0;
};
class constant_texture : public texture {
public:
constant_texture() {}
constant_texture(vec3 c) : color(c) {}
virtual vec3 value(float u, float v, const vec3& p) const {
return color;
}
vec3 color;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
<div class='together'>
Now we can make textured materials by replacing the vec3 color with a texture pointer:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class lambertian : public material {
public:
lambertian(texture *a) : albedo(a) {}
virtual bool scatter(const ray& r_in, const hit_record& rec,
vec3& attenuation, ray& scattered) const {
vec3 target = rec.p + rec.normal + random_in_unit_sphere();
scattered = ray(rec.p, target - rec.p);
attenuation = albedo->value(0, 0, rec.p);
return true;
}
texture *albedo;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
where you used to have
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
new lambertian(vec3(0.5, 0.5, 0.5)))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
now you should replace the `vec3(...)` with `new constant_texture(vec3(...))`
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
new lambertian(new constant_texture(vec3(0.5, 0.5, 0.5))))
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
<div class='together'>
We can create a checker texture by noting that the sign of sine and cosine just alternates in a
regular way and if we multiply trig functions in all three dimensions, the sign of that product
forms a 3D checker pattern.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class checker_texture : public texture {
public:
checker_texture() {}
checker_texture(texture *t0, texture *t1): even(t0), odd(t1) {}
virtual vec3 value(float u, float v, const vec3& p) const {
float sines = sin(10*p.x())*sin(10*p.y())*sin(10*p.z());
if (sines < 0)
return odd->value(u, v, p);
else
return even->value(u, v, p);
}
texture *odd;
texture *even;
};
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
</div>
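As a quick sanity check on that sign argument: $\sin(10x)$ changes sign every $\pi/10 \approx 0.314$ units, so stepping that far along one axis flips the product's sign and lands on the other color. A standalone sketch of just the sign test (plain math, no ray-tracer types):
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
#include <cmath>

// True where checker_texture would choose the odd texture: the product
// of the three sines is negative there.
bool checker_is_odd(double x, double y, double z) {
    double sines = std::sin(10*x) * std::sin(10*y) * std::sin(10*z);
    return sines < 0;
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Stepping x by $\pi/10$ from (0.1, 0.1, 0.1) flips exactly one sine's sign, so the point moves from an even cell to an odd one; stepping y as well flips it back.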
Those checker odd/even pointers can be to a constant texture or to some other procedural texture.
This is in the spirit of shader networks introduced by Pat Hanrahan back in the 1980s.
<div class='together'>
If we add this to our random_scene() function’s base sphere:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
texture *checker = new checker_texture(
new constant_texture(vec3(0.2, 0.3, 0.1)),
new constant_texture(vec3(0.9, 0.9, 0.9))
);
list[0] = new sphere(vec3(0,-1000,0), 1000, new lambertian(checker));
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We get:
![Image 4-1](../images/img-2-4-01.jpg)
</div>
<div class='together'>
If we add a new scene:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
hittable *two_spheres() {
texture *checker = new checker_texture(
new constant_texture(vec3(0.2, 0.3, 0.1)),
new constant_texture(vec3(0.9, 0.9, 0.9))
);
int n = 50;
hittable **list = new hittable*[n+1];
list[0] = new sphere(vec3(0,-10, 0), 10, new lambertian(checker));
list[1] = new sphere(vec3(0, 10, 0), 10, new lambertian(checker));
return new hittable_list(list,2);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
With camera:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
vec3 lookfrom(13,2,3);
vec3 lookat(0,0,0);
float dist_to_focus = 10.0;
float aperture = 0.0;
camera cam(lookfrom, lookat, vec3(0,1,0), 20, float(nx)/float(ny),
aperture, dist_to_focus, 0.0, 1.0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
We get:
![Image 4-2](../images/img-2-4-02.jpg)
</div>
Perlin Noise
====================================================================================================
<div class='together'>
To get cool-looking solid textures most people use some form of Perlin noise, named after its
inventor, Ken Perlin. A Perlin texture doesn’t return white noise like this:
![Image 5-1](../images/img-2-5-01.jpg)
Instead it returns something similar to blurred white noise:
![Image 5-2](../images/img-2-5-02.jpg)
</div>
A key part of Perlin noise is that it is repeatable: it takes a 3D point as input and always returns
the same randomish number. Nearby points return similar numbers. Another important part of Perlin
noise is that it be simple and fast, so it’s usually done as a hack. I’ll build that hack up
incrementally based on Andrew Kensler’s description.
<div class='together'>
We could just tile all of space with a 3D array of random numbers and use them in blocks. You get
something blocky where the repeating is clear:
![Image 5-3](../images/img-2-5-03.jpg)
</div>
<div class='together'>
Let’s just use some sort of hashing to scramble this, instead of tiling. This has a bit of support
code to make it all happen:
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C++
class perlin {
public:
float noise(const vec3& p) const {
float u = p.x() - floor(p.x());
float v = p.y() - floor(p.y());
float w = p.z() - floor(p.z());
int i = int(4*p.x()) & 255;
int j = int(4*p.y()) & 255;
int k = int(4*p.z()) & 255;
return ranfloat[perm_x[i] ^ perm_y[j] ^ perm_z[k]];
}
static float *ranfloat;
static int *perm_x;
static int *perm_y;
static int *perm_z;
};
static float* perlin_generate() {
float * p = new float[256];
for (int i = 0; i < 256; ++i)
p[i] = random_double();
return p;
}
void permute(int *p, int n) {
for (int i = n-1; i > 0; i--) {
int target = int(random_double()*(i+1));
int tmp = p[i];
p[i] = p[target];
p[target] = tmp;
}
return;
}