The second branch predictor generates its branch prediction data later in the pipeline, but with higher prediction accuracy.
In one embodiment, a branch history management circuit is configured to process a branch prediction table swap instruction.
The apparatus stores information indicating the outcome of previous executions and predictions of the multiple-target branch instruction in a branch prediction table (12).
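As a rough illustrative sketch only (not the disclosed apparatus), a branch prediction table recording prior outcomes is commonly built from 2-bit saturating counters indexed by the branch address; the table size and indexing below are assumptions for illustration:

```c
#include <stdint.h>

#define TABLE_SIZE 1024  /* assumed table size, for illustration only */

/* 2-bit saturating counters: 0-1 predict not-taken, 2-3 predict taken. */
static uint8_t table[TABLE_SIZE];

/* Assumed index function: drop the byte offset, then fold into the table. */
static unsigned index_of(uint32_t pc) { return (pc >> 2) % TABLE_SIZE; }

/* Predict taken when the counter is in the upper half of its range. */
int predict_taken(uint32_t pc) { return table[index_of(pc)] >= 2; }

/* Record the actual outcome of an executed branch, saturating at 0 and 3. */
void update(uint32_t pc, int taken) {
    uint8_t *c = &table[index_of(pc)];
    if (taken && *c < 3) (*c)++;
    else if (!taken && *c > 0) (*c)--;
}
```

The 2-bit hysteresis means a single anomalous outcome does not immediately flip the stored prediction.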
In one embodiment, the branch prediction mechanism attempts to predict up to two branch instructions per clock cycle.
The branch prediction unit additionally stores a set of return selectors corresponding to one or more branch predictions.
In some cases, this arrangement reduces both the area and the power consumption of the branch prediction unit.
This mechanism permits the branch predictor to maintain its fidelity and eliminates spurious pipeline flushes.
The purpose of the present invention is to improve branch prediction accuracy when executing a thread with fine-grained parallel processing.
The code translator may insert a branch instruction in the translated code sequence which bypasses instructions to update the stack pointer and the program counter, based on the branch prediction.
The branch prediction mechanism may select the target address from the displacement field of the relative branch instruction instead of performing an addition to generate the target address.
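For context, the conventional path that such a mechanism avoids computes the target by adding a sign-extended displacement to the program counter. The 16-bit displacement width and next-instruction offset below are assumed encodings, chosen only to make the sketch concrete:

```c
#include <stdint.h>

/* Assumed encoding: a 16-bit signed displacement in the low bits of
 * a relative branch instruction word. */
static int32_t sign_extend16(uint32_t field) {
    return (int32_t)(int16_t)(field & 0xFFFFu);
}

/* Conventional target generation: next-instruction PC plus displacement.
 * The mechanism described above would bypass this addition. */
uint32_t target_by_addition(uint32_t pc, uint32_t insn) {
    return pc + 4u + (uint32_t)sign_extend16(insn);
}
```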
Swapping branch direction histories in response to branch prediction table swap instructions, and related systems and methods, are disclosed.
In this manner, branch direction history sets assigned to particular software code regions are used for branch prediction when processing the particular software code regions.
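A minimal sketch of this per-region swap, assuming a single global-history value per region and a small fixed region count (both assumptions, not the disclosed design), might look like:

```c
#include <stdint.h>

#define NUM_REGIONS 4  /* assumed number of software code regions */

/* One saved branch direction history per software code region. */
static uint64_t saved_history[NUM_REGIONS];
static uint64_t active_history;   /* history the predictor uses now */
static unsigned active_region;

/* Models the effect of a branch prediction table swap instruction:
 * save the history of the current region and restore the history
 * assigned to the region being entered. */
void swap_history(unsigned new_region) {
    saved_history[active_region] = active_history;
    active_history = saved_history[new_region];
    active_region = new_region;
}
```

Because each region's history is preserved across the swap, returning to a region resumes prediction with the history that region had built up.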
The prefetch unit (108) may fetch instructions from the instruction cache (106) until the branch prediction unit (132) outputs a predicted target address for a branch instruction.
As each branch instruction is fetched and decoded, its address is used to perform parallel look-ups in the two branch prediction caches.
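One plausible arrangement of such a dual-cache lookup (the sizes, direct-mapped indexing, and priority rule here are illustrative assumptions) probes both caches with the branch address and gives priority to a hit in the larger cache:

```c
#include <stdint.h>

#define SMALL_SIZE 256   /* assumed sizes, for illustration only */
#define LARGE_SIZE 4096

typedef struct { uint32_t tag; uint32_t target; int valid; } entry_t;

static entry_t small_cache[SMALL_SIZE];
static entry_t large_cache[LARGE_SIZE];

/* Direct-mapped lookup: hit when the entry is valid and its tag
 * matches the full branch address. */
static int lookup(entry_t *c, unsigned size, uint32_t pc, uint32_t *target) {
    entry_t *e = &c[(pc >> 2) % size];
    if (e->valid && e->tag == pc) { *target = e->target; return 1; }
    return 0;
}

/* Probe both caches (in hardware, in parallel); prefer the larger
 * cache when both hit. Returns 1 on a predicted target. */
int predict_target(uint32_t pc, uint32_t *target) {
    uint32_t t_small, t_large;
    int hit_small = lookup(small_cache, SMALL_SIZE, pc, &t_small);
    int hit_large = lookup(large_cache, LARGE_SIZE, pc, &t_large);
    if (hit_large) { *target = t_large; return 1; }
    if (hit_small) { *target = t_small; return 1; }
    return 0;
}
```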