Can you help me please?
Say the CPU wants to read data from main memory:
- at time T1 the CPU signals the address of the memory location it wants
to read data from
- at time T2 the control signal tells the type of access the CPU wants
to establish (a read access)
- at time T3 RAM puts the data requested by the CPU on the data bus
- when the CPU receives the data, it removes the control and address
signals; that causes main memory to remove, at time T4, the data
signals from the data bus
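The sequence above could be sketched as a toy simulation (the `Bus` class, the names, and the one-event-per-step timing are my own simplifications, not real hardware behavior):

```python
# Toy model of the read transaction described above.
# Each step is tagged with the time Tn at which it happens.

class Bus:
    def __init__(self):
        self.address = None   # address bus
        self.read = False     # READ control line
        self.data = None      # data bus

def read_word(bus, ram, addr):
    timeline = []
    # T1: CPU puts the address on the address bus
    bus.address = addr
    timeline.append(("T1", f"address {addr:#06x} driven"))
    # T2: CPU asserts the READ control line
    bus.read = True
    timeline.append(("T2", "READ asserted"))
    # T3: RAM responds by driving the requested data onto the data bus
    bus.data = ram[bus.address]
    timeline.append(("T3", f"data {bus.data:#04x} on data bus"))
    # T4: CPU latches the data and removes address/control signals,
    #     so RAM stops driving the data bus
    value = bus.data
    bus.address, bus.read, bus.data = None, False, None
    timeline.append(("T4", "bus released"))
    return value, timeline

ram = {0x1000: 0x2A}
value, timeline = read_word(Bus(), ram, 0x1000)
print(value)  # 42
for t, event in timeline:
    print(t, event)
```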
1) What do you call the time period between T1 and T4 (I assume memory
access time)?
2) What do you call the time period between T2 and T3?
3) Shouldn't the speed of RAM be specified by how long the time
period between T2 and T3 is? I assume this because the time periods
T1-T2 and T3-T4 don't depend on the speed of RAM.
"The CAS Latency is the number of clock cycles that elapse from the time the request for data is sent to the actual memory location until the data is transmitted from the module."
Memory is labeled and marketed with its CAS latency in the module's feature list.
: I think you're referring to this:
: "The CAS Latency is the number of clock cycles that elapse from the time the request for data is sent to the actual memory location until the data is transmitted from the module."
If I understood this correctly, then CAS latency gives the number of clock cycles from the time the request for data ARRIVES at the memory (I differentiate between the request being sent towards the memory and the request actually being received by the memory) to the time the data is transmitted from the memory?
Following my example in previous post, wouldn't CAS latency be time period from T2 to T3?
: Following my example in previous post, wouldn't CAS latency be time period from T2 to T3?
This is quickly getting over my head, but I'd suspect that in your original example, T1 and T2 are the same. Why couldn't the address and the read/write bit be merged into the same clock cycle?
: (I differentiate between the request being sent towards the memory and the request actually being received by the memory)
For any practical discussion, these time differences should be considered zero. Are you concerned with the amount of time it physically takes electricity to travel the 1-2 inches from the memory controller to the memory? The address and read/write signals are all sent from the memory controller to the memory in parallel, so in effect, a request leaving the memory controller and the memory receiving the request should be the same thing.
So yes, you are correct that CAS latency is essentially the "turn-around" time it takes for that piece of memory to see a request and have the data ready to be read. By having a known CAS latency, the memory controller doesn't have to poll the memory for status or rely on any other sort of "I'm ready!" signal coming from the memory. With a CAS latency of 3, a request is shoved out, the controller simply waits (or does other stuff) for exactly 3 clock cycles, and then the memory is read. This is why a stick of memory actually tells the PC how fast its CAS latency is, and that information is then programmed into the memory controller so the two stay in sync.
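That fixed-wait scheme can be modeled roughly like this (a sketch under my own simplifications: a single latency number and abstract clock counting, ignoring the row-activate and other phases real SDRAM timing has):

```python
# Rough sketch of a controller that relies on a known CAS latency
# instead of any "ready" signal coming back from the memory.

CAS_LATENCY = 3  # clock cycles, as reported by the memory module itself

class Memory:
    def __init__(self, contents):
        self.contents = contents
        self.pending = None  # (cycle when data becomes valid, address)

    def request(self, address, now):
        # Memory starts the access; data will be valid CAS_LATENCY
        # cycles after the request is seen.
        self.pending = (now + CAS_LATENCY, address)

    def data_bus(self, now):
        ready_cycle, address = self.pending
        assert now >= ready_cycle, "read too early: data not valid yet"
        return self.contents[address]

def controller_read(mem, address):
    cycle = 0
    mem.request(address, cycle)
    # The controller doesn't poll; it simply counts off the known
    # number of clock cycles, then reads the data bus.
    cycle += CAS_LATENCY
    return mem.data_bus(cycle)

mem = Memory({0x40: 99})
print(controller_read(mem, 0x40))  # 99
```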