How do I compare the BogoMIPS of ARM-based computers and Intel-based computers?
1 vote · 0 answers · 2240 views
Looking at my ARM-based Jetson computers, I see a BogoMIPS of **62.5**.
On my Intel-based computers, with Xeons, I see **4,200**.
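For reference, here is how I'm pulling the numbers, as a minimal Python sketch that assumes a Linux `/proc/cpuinfo` (the field is spelled `bogomips` on x86 and `BogoMIPS` on ARM, so the match is case-insensitive):

```python
#!/usr/bin/env python3
"""Print the BogoMIPS value(s) reported in /proc/cpuinfo."""

def read_bogomips(path="/proc/cpuinfo"):
    """Return one BogoMIPS value per CPU entry found in the file."""
    values = []
    with open(path) as f:
        for line in f:
            key, _, value = line.partition(":")
            # 'bogomips' on x86, 'BogoMIPS' on ARM
            if key.strip().lower() == "bogomips":
                values.append(float(value.strip()))
    return values

if __name__ == "__main__":
    values = read_bogomips()
    if values:
        print(f"{len(values)} CPU entries, BogoMIPS = {values[0]:.2f}")
    else:
        print("no BogoMIPS field found")
```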
I've found this note on [Wikipedia](https://en.wikipedia.org/wiki/BogoMips):
> ## Timer-based delays
>
> In 2012, ARM contributed a new udelay implementation allowing the system timer built into many ARMv7 CPUs to be used instead of a busy-wait loop. This implementation was released in Version 3.6 of the Linux kernel. Timer-based delays are more robust on systems that use frequency scaling to dynamically adjust the processor's speed at runtime, as loops_per_jiffies values may not necessarily scale linearly. Also, since the timer frequency is known in advance, no calibration is needed at boot time.
>
> One side effect of this change is that the BogoMIPS value will reflect the timer frequency, not the CPU's core frequency. Typically the timer frequency is much lower than the processor's maximum frequency, and some users may be surprised to see an unusually low BogoMIPS value when comparing against systems that use traditional busy-wait loops.
However, that doesn't really tell me how to convert between the two values.
My ARM runs at 1.2 GHz and my Intel at 2.1 GHz (boosting to 4.1 GHz or so).
So I'm sure the numbers would be closer if both platforms used the same algorithm; I'm just not sure how to interpret that information. So the question is: how do I convert one BogoMIPS value to the other?
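For context, here is my rough back-of-envelope attempt so far. It assumes the kernel's usual formula `BogoMIPS = loops_per_jiffy * HZ / 500000`, that ARM's timer-based udelay makes `loops_per_jiffy` track the architected timer rather than the core clock, and that the classic x86 busy-wait loop gives roughly 2 BogoMIPS per MHz of core clock. None of this is verified against the kernel source for these exact machines:

```python
#!/usr/bin/env python3
"""Rough arithmetic for comparing the two BogoMIPS values (assumptions above)."""

arm_bogomips = 62.5      # Jetson (timer-based udelay)
x86_bogomips = 4200.0    # Xeon (busy-wait calibration)

# If the ARM number reflects the timer, the implied timer frequency is:
arm_timer_mhz = arm_bogomips / 2          # ~31.25 MHz
# The 1.2 GHz ARM core clock would have given roughly this with the old loop:
arm_loop_bogomips_estimate = 2 * 1200     # ~2400 BogoMIPS

# The x86 number lines up with the 2.1 GHz base clock:
x86_base_mhz_estimate = x86_bogomips / 2  # ~2100 MHz

print(f"implied ARM timer frequency : {arm_timer_mhz:.2f} MHz")
print(f"ARM estimate with busy-wait : ~{arm_loop_bogomips_estimate} BogoMIPS")
print(f"implied x86 base clock      : ~{x86_base_mhz_estimate:.0f} MHz")
```

If those assumptions hold, the two numbers aren't measuring the same thing at all, which is what I'd like confirmed (or corrected).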
Asked by Alexis Wilke (3095 rep) on Mar 2, 2021, 08:13 PM