The purpose of today’s lab was to see how the choice of algorithm affects performance on the same problem. The challenge was to create a program capable of adjusting the volume of a sequence of sound samples, with a volume factor in the range of 0.000 to 1.000.
Designing the code – Solution A
For the first design, we created a simple table filled with random values (0-36750) to simulate sound samples. Using that table, we multiplied each entry by the specified volume factor. The program looks as follows:
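A minimal sketch of what Solution A might look like, assuming the samples are plain integers in the 0-36750 range described above; the names and sample count here are illustrative, not the lab's actual code:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

#define MAX_SAMPLE 36750   /* upper bound of the simulated sample range */

/* Fill the table with pseudo-random values in 0..MAX_SAMPLE to simulate
 * a chunk of sound; this is the initialization that runs before the timer. */
void fill_samples(int32_t *data, size_t n) {
    for (size_t i = 0; i < n; i++)
        data[i] = rand() % (MAX_SAMPLE + 1);
}

/* Solution A: multiply every sample by the volume factor directly. */
void scale_naive(int32_t *data, size_t n, float volume) {
    for (size_t i = 0; i < n; i++)
        data[i] = (int32_t)(data[i] * volume);
}
```

Only the call to `scale_naive` would sit between the timer start and stop; `fill_samples` belongs to the untimed initialization.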
Note: We only want to time the actual processing of the sound adjustment, not the initialization, hence the timer starts after all initialization is complete.
Designing the code – Solution B
The second design involved creating a table of pre-calculated values that we use to modify the sound chunk.
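A sketch of the lookup-table variant under the same assumptions (integer samples in 0-36750, illustrative names): every possible sample value is scaled once up front, so the timed loop is reduced to a single table read per sample.

```c
#include <stddef.h>
#include <stdint.h>

#define MAX_SAMPLE 36750   /* upper bound of the simulated sample range */

static int32_t lookup[MAX_SAMPLE + 1];  /* one pre-scaled result per value */

/* Pre-compute the scaled result for every possible sample value.
 * This runs before the timer starts. */
void build_lookup(float volume) {
    for (int32_t s = 0; s <= MAX_SAMPLE; s++)
        lookup[s] = (int32_t)(s * volume);
}

/* Solution B: the timed loop only indexes into the pre-computed table. */
void scale_lookup(int32_t *data, size_t n) {
    for (size_t i = 0; i < n; i++)
        data[i] = lookup[data[i]];
}
```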
Note: As you can see in this variation of the program, most of the calculations are done before our timer starts; inside the timed loop we only need to access the index that holds our result.
Analyzing the Results
- Various optimization levels produced different results, with the -O3 flag giving the fastest code, roughly a 6x improvement over no optimization.
- The distribution of the data does not matter, since in both cases the code processes every piece of information in the array.
- Audio fed at 44,100 samples per second on two channels (88,200 samples per second in total) can easily be handled by either algorithm, since both currently process on the order of 125 million samples per second.
- The memory footprint of the second approach is slightly larger, since it carries an extra pre-computed table that it needs to look values up from.
- The performance results are as follows:
- The simple multiplication solution is the faster algorithm in this case, but I believe that is because the processor can perform the multiplications very quickly. On a slower processor, I think the table algorithm could catch up or even overtake it, as long as there is enough memory to hold the pre-computed values.
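To make the comparison above concrete, the two loops can be timed side by side with `clock()`. This harness is a sketch under the same assumptions as the earlier snippets (integer samples in 0-36750, illustrative names); which approach wins depends on the processor and memory system, as noted above.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <time.h>

#define MAX_SAMPLE 36750   /* upper bound of the simulated sample range */

static int32_t table[MAX_SAMPLE + 1];

/* Time both approaches on n random samples; writes elapsed seconds into
 * *t_mul and *t_lut, and returns 0 if both produced identical output. */
int compare_approaches(size_t n, float volume, double *t_mul, double *t_lut) {
    int32_t *a = malloc(n * sizeof *a);
    int32_t *b = malloc(n * sizeof *b);
    if (!a || !b) { free(a); free(b); return -1; }

    srand(1);
    for (size_t i = 0; i < n; i++)
        a[i] = b[i] = rand() % (MAX_SAMPLE + 1);

    /* Solution A: direct multiplication, timed. */
    clock_t t0 = clock();
    for (size_t i = 0; i < n; i++)
        a[i] = (int32_t)(a[i] * volume);
    *t_mul = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Solution B: build the table (untimed), then do the timed lookups. */
    for (int32_t s = 0; s <= MAX_SAMPLE; s++)
        table[s] = (int32_t)(s * volume);
    t0 = clock();
    for (size_t i = 0; i < n; i++)
        b[i] = table[b[i]];
    *t_lut = (double)(clock() - t0) / CLOCKS_PER_SEC;

    /* Sanity check: both approaches must yield the same adjusted samples. */
    int mismatch = 0;
    for (size_t i = 0; i < n; i++)
        if (a[i] != b[i]) { mismatch = 1; break; }
    free(a); free(b);
    return mismatch;
}
```

Keeping the table build outside the timed region mirrors the lab's setup, where only the per-sample adjustment loop is measured.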