My output on the left, from Jupyter/Spyder, vs. the output on the right from the Dataquest built-in interpreter:
figure 1.
My Code:
import time
import random
import matplotlib.pyplot as plt
def plot_times(times):
    plt.plot(times)
    plt.ylabel('runtime')
    plt.xlabel('size')
    plt.show()

def sum_values(values):
    total = 0
    for value in values:
        total += value
    return total

def gen_input(length):
    return [random.randint(-1000, 1000) for _ in range(length)]

# add your code below
times = []
for length in range(1, 501):
    values = gen_input(length)
    start = time.time()
    sum_values(values)
    end = time.time()
    times.append(end - start)
print(times)
plot_times(times)
What I expected to happen:
I expected the same output as the one from DQ, just a scaled version of the runtime plot.
What actually happened:
Instead, my outputs in Jupyter and Spyder ignored some very small values and simply recorded them as zeros. I assume a fast CPU clock speed does not have anything to do with the output.
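One thing I am wondering about is whether the zeros come from the limited resolution of time.time() on my platform rather than from CPU speed. Below is a small sketch (my own idea, not from the mission) of how I could check the resolution of the two clocks and re-time the loop with time.perf_counter(), reusing the gen_input, sum_values and plot_times functions defined above:

import time

# Compare the advertised resolution of the two clocks.
print(time.get_clock_info('time').resolution)
print(time.get_clock_info('perf_counter').resolution)

# Re-time sum_values with the higher-resolution clock.
times = []
for length in range(1, 501):
    values = gen_input(length)
    start = time.perf_counter()
    sum_values(values)
    end = time.perf_counter()
    times.append(end - start)
plot_times(times)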
Edit: further investigation/details. I did indeed notice a performance difference between my runtime on a 10th-generation CPU and the runtime you posted on m/476 (p. 4/11). The left graph depicts what I get when I run the maximum() function from page 3/11 (code below), vs. the right graph, which you ran on your system.
figure 2.
At a sample size of 500 array elements it shows a runtime difference of:
left: 0.00008 s
right: 0.0006 s
For the difference test I ran the following code, taken from the compiler on your server/DQ; it produced the graphs in figure 2.
import time
import random
import matplotlib.pyplot as plt
def maximum(values):
    answer = None
    for value in values:
        if answer is None or answer < value:
            answer = value
    return answer

def gen_input(length):
    return [random.randint(-1000, 1000) for _ in range(length)]

# add your code below
times = []
for length in range(1, 501):
    values = gen_input(length)
    start = time.time()
    maximum(values)
    end = time.time()
    runtime = end - start
    times.append(runtime)
print(times)
plt.plot(times)
plt.show()
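Since a single call at these input sizes only takes a few microseconds, I am also wondering whether averaging several repetitions per input size would make the curves more comparable across machines. A rough sketch of what I mean (the repeat count of 100 is arbitrary, and it reuses the maximum and gen_input functions above):

import time

times = []
repeats = 100  # arbitrary number of repetitions per input size
for length in range(1, 501):
    values = gen_input(length)
    start = time.perf_counter()
    for _ in range(repeats):
        maximum(values)
    end = time.perf_counter()
    times.append((end - start) / repeats)  # average runtime of one call
plt.plot(times)
plt.show()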