How large is my 2012 MacBook Air's L1 cache? Are they larger now?
1 vote · 1 answer · 504 views
Discussion in the comments below [this answer](https://codereview.stackexchange.com/a/235222/145009) to *improving speed of this numpy-based diffraction calculator* suggests that the script in the answer runs slowly for me but fast for others (even on earlier versions of NumPy) because my late-2012 MacBook Air's L1 cache may be smaller than theirs. It could also be that I am running dangerously low on disk space (I saw 40 MB/sec reads and writes while the script ran).
I'm curious though, what is the size of the on-processor L1 cache for my late 2012 MacBook Air, and how does it compare to new MacBooks?
MacBook Air: 13-inch, Mid 2012
Processor: 1.8 GHz Intel Core i5
Memory: 4 GB 1600 MHz DDR3
Hard Drive: 251 GB Flash Storage
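As a cross-check on the timing-based estimate below, the kernel can report the cache sizes directly: macOS exposes them via `sysctl` (e.g. `hw.l1dcachesize`), and Linux via sysfs. A best-effort sketch (the fallback paths and the helper name are my own; it returns `None` where neither source exists):

```python
import subprocess
import sys
from pathlib import Path

def l1_dcache_bytes():
    """Best-effort query of the L1 data cache size, in bytes.

    Reads `sysctl hw.l1dcachesize` on macOS, or
    /sys/devices/system/cpu/cpu0/cache/index0/size on Linux.
    Returns None if neither source is available.
    """
    if sys.platform == "darwin":
        try:
            out = subprocess.check_output(["sysctl", "-n", "hw.l1dcachesize"])
            return int(out)  # sysctl prints the size in plain bytes
        except (OSError, subprocess.CalledProcessError, ValueError):
            return None
    try:
        text = Path(
            "/sys/devices/system/cpu/cpu0/cache/index0/size"
        ).read_text().strip()  # e.g. "32K"
    except OSError:
        return None
    units = {"K": 1024, "M": 1024 ** 2}
    if text and text[-1] in units:
        return int(text[:-1]) * units[text[-1]]
    return int(text)

print(l1_dcache_bytes())
```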
I'm not a developer, but I ran a small test. The script below shows that multiplication of two NumPy arrays is fastest (a few nanoseconds per float multiply) when the array size is about 10^4 elements. Each element is 8 bytes, **so I'm estimating that my L1 cache size is about 10^5 bytes.**
Is that close?
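The arithmetic behind that estimate, spelled out (the element count comes from where the curve bottoms out; 8 bytes is NumPy's default `float64`):

```python
fastest_N = 10 ** 4      # array size where per-multiply time is lowest
bytes_per_float = 8      # NumPy's default dtype is float64

# 10^4 elements * 8 bytes/element = 8 * 10^4 bytes, i.e. roughly 10^5
estimate = fastest_N * bytes_per_float
print(estimate)
```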
**note:** I estimate time using both `time.time()` and `time.process_time()`. The former (blue solid line, lower values) is "people time": how long I have to wait for something to finish.
![estimated time (sec) per float multiply vs array size](https://i.sstatic.net/RrN4F.png)
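The difference between the two clocks can be seen directly in a minimal sketch, using `time.sleep()` as a stand-in for waiting: the wall clock keeps running while the process sleeps, but CPU time does not.

```python
import time

t0 = time.time()            # wall clock ("people time")
p0 = time.process_time()    # CPU time of this process only

time.sleep(0.2)             # waiting, but burning no CPU

wall = time.time() - t0          # at least ~0.2 s: sleep counts
cpu = time.process_time() - p0   # close to 0 s: sleep does not
print(wall, cpu)
```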
```python
import numpy as np
import matplotlib.pyplot as plt
import time

Ns = np.logspace(1, 8, 15).astype(int)   # array sizes from 10^1 to 10^8
t1, t2 = [], []
for N in Ns:
    x = np.random.random(N)
    t1_start = time.time()
    t2_start = time.process_time()
    # repeat so roughly 10^6 elements are multiplied per size; keep n >= 1
    # so that N*n is never zero for the largest arrays
    n = max(1, int(1E+06 / N))
    for i in range(n):
        y = x * x
    t1.append((time.time() - t1_start) / (N * n))
    t2.append((time.process_time() - t2_start) / (N * n))

plt.figure()
plt.plot(Ns, t1)
plt.plot(Ns, t2, '--')
plt.xscale('log')
plt.yscale('log')
plt.title('estimated time (sec) per float multiply vs array size', fontsize=14)
plt.show()
```
Asked by uhoh
(1877 rep)
Jan 9, 2020, 08:19 AM
Last activity: Jan 9, 2020, 11:19 AM