Hi, good day!

I have been trying to wrap my head around the fetch stage of the O3 CPU
model, and one thing bugs me: why does the macroop -> microop conversion
(where the ISA-specific decoder is called) take place in the fetch stage?
Is there any particular reason it could not have been done in the decode
stage instead?

My understanding from the code is that the branch predictor needs these
microops (which are used to build the DynamicInst objects) for prediction.
But if so, why couldn't macroops be used for this?

Sorry if the question is trivial.

Any help would be appreciated. Thanks,
Vidit