This time, it's "Indirector".
https://thehackernews.com/2024/07/new-intel-cpu-vulnerability-indirector.html
The same article also mentions that ARM have their own issue
with the Memory Tagging Extension.
On Wed, 24 Jul 2024 21:05:51 -0000 (UTC)
Thomas Koenig <tkoenig@netcologne.de> wrote:
This time, it's "Indirector".
https://thehackernews.com/2024/07/new-intel-cpu-vulnerability-indirector.html
The same article also mentions that ARM have their own issue
with the Memory Tagging Extension.
Once, in order to be considered an attack, a security researcher had
to create a POC exploit, no matter how unrealistic the setup, at least
showing that the principle could work.
No more, it seems.
Today, simple hand-waving about the possibility of an attack is fully
sufficient.
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor tables
on thread switch and I would think that would shut this all down.
EricP <ThatWouldBeTelling@thevillage.com> writes:
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor tables
on thread switch and I would think that would shut this all down.
1) The attacker can still attack the context (even if the notion of
context includes the privilege level) from within itself. E.g.,
the kernel can be attacked by training the kernel-level branch
prediction by performing appropriate system calls, and then
performing a system call that reveals data through a
mis-speculation side channel. IIRC such Spectre attacks have
already been demonstrated years ago (a sketch of such a gadget
follows below).
2) Users are supposedly not prepared to pay the cost of invisible
speculation (a 5-20% slowdown, depending on which paper you read); are
they prepared to pay the cost of purging the user-mode entries of
branch predictors on thread switches?
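To make point 1 concrete, here is the classic Spectre v1 bounds-check
gadget, essentially the example from the original Spectre paper (the
names are illustrative). If such a pattern sits in a syscall handler
with x attacker-controlled, the attacker trains the branch with
in-bounds values and then passes an out-of-bounds x; the secret byte
array1[x] is leaked through which line of array2 ends up cached:

#include <stddef.h>
#include <stdint.h>

size_t array1_size = 16;
uint8_t array1[16];            /* secrets live in memory beyond this */
uint8_t array2[256 * 512];     /* probe array: one cache line per value */
volatile uint8_t temp;

void victim(size_t x)          /* x comes from the attacker */
{
    if (x < array1_size) {     /* predictor trained to predict "taken" */
        /* executes speculatively even for out-of-bounds x; the
           array2 access leaves a cache footprint indexed by the
           secret byte array1[x] */
        temp &= array2[array1[x] * 512];
    }
}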
My guess is that the stuff plays out as usual: The hardware
manufacturers don't want to implement a proper fix like invisible
speculation, and they suggest software mitigations like purging
user-mode entries on thread switch. The software people then
usually consider the mitigation too expensive in performance or in
development effort, so only a minuscule amount of software contains
Spectre mitigations.
- anton
Anton Ertl wrote:
EricP <ThatWouldBeTelling@thevillage.com> writes:
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor
tables on thread switch and I would think that would shut this all
down.
1) The attacker can still attack the context (even if the notion of
context includes the privilege level) from within itself. E.g.,
the kernel can be attacked by training the kernel-level branch
prediction by performing appropriate system calls, and then
performing a system call that reveals data through a
mis-speculation side channel. IIRC such Spectre attacks have
already been demonstrated years ago.
I hadn't thought of this but yes, if JavaScript can contain remotely
exploitable gadgets then syscall might too. And it's not just syscall
args but any values that enter the kernel from outside that are used
as indexes after bounds checking. So the image file mapper, network
packets, etc.
But if I recall correctly the fix for JavaScript was something like
a judiciously placed FENCE instruction to block speculation.
And for the kernel this attack surface should be quite small as all of
these values are already validated.
So wouldn't it just be a matter of replacing certain kernel value
validation IF statements with IF_NO_SPECULATE?
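A minimal sketch of what such an IF_NO_SPECULATE could expand to on
x86 (IF_NO_SPECULATE is a hypothetical construct; the LFENCE after
the bounds check is one real mechanism, relying on its behavior as a
speculation barrier, which is a non-architectural property):

#include <stddef.h>
#include <stdint.h>

static uint8_t table[256];

/* "IF_NO_SPECULATE": an ordinary validation IF followed by a
   speculation barrier, so the guarded load cannot execute under a
   mispredicted branch. */
static int lookup(size_t untrusted_idx)
{
    if (untrusted_idx < sizeof(table)) {              /* the validation IF */
        __asm__ __volatile__("lfence" ::: "memory");  /* block speculation */
        return table[untrusted_idx];  /* runs only after the check resolves */
    }
    return -1;
}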
2) Users are supposedly not prepared to pay the cost of invisible
speculation (a 5-20% slowdown, depending on which paper you read); are
they prepared to pay the cost of purging the user-mode entries of
branch predictors on thread switches?
It's actually thread switches that also switch the process, because
if the new thread is in the same process then there is no security
domain switch.
Plus, that peer thread could likely make use of the old user-mode
predictions.
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process. Because if you retain
the predictor values then the new thread has to unlearn what it
learned, before it starts to learn values for the new thread. Whereas
if the predictor is flushed it can immediately learn its own values.
My guess is that the stuff plays out as usual: The hardware
manufacturers don't want to implement a proper fix like invisible
speculation, and they suggest software mitigations like purging
user-mode entries on thread switch. The software people then
usually consider the mitigation too expensive in performance or in
development effort, so only a minuscule amount of software contains
Spectre mitigations.
- anton
On Fri, 26 Jul 2024 20:27:04 +0000, EricP wrote:
Anton Ertl wrote:
EricP <ThatWouldBeTelling@thevillage.com> writes:
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor
tables on thread switch and I would think that would shut this all
down.
1) The attacker can still attack the context (even if the notion of
context includes the privilege level) from within itself. E.g.,
the kernel can be attacked by training the kernel-level branch
prediction by performing appropriate system calls, and then
performing a system call that reveals data through a
mis-speculation side channel. IIRC such Spectre attacks have
already been demonstrated years ago.
I hadn't thought of this but yes, if JavaScript can contain remotely
exploitable gadgets then syscall might too. And it's not just syscall
args but any values that enter the kernel from outside that are used
as indexes after bounds checking. So the image file mapper, network
packets, etc.
But if I recall correctly the fix for JavaScript was something like
a judiciously placed FENCE instruction to block speculation.
And for the kernel this attack surface should be quite small as all of
these values are already validated.
So wouldn't it just be a matter of replacing certain kernel value
validation IF statements with IF_NO_SPECULATE?
2) Users are supposedly not prepared to pay the cost of invisible
speculation (a 5-20% slowdown, depending on which paper you read); are
they prepared to pay the cost of purging the user-mode entries of
branch predictors on thread switches?
It's actually thread switches that also switch the process, because
if the new thread is in the same process then there is no security
domain switch.
Plus, that peer thread could likely make use of the old user-mode
predictions.
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process.
47 threads in one process all crunching on one great big array/matrix.
This will show almost complete positive impact on sharing the BP.
Because if you retain the predictor values then the new thread has to
unlearn what it learned, before it starts to learn values for the new
thread. Whereas if the predictor is flushed it can immediately learn
its own values.
The BP only has 4 states in 2 bits; anything you initialize its state
to will take nearly as long to seed as a completely random table one
inherits from the previous process. {{BTBs are different}}
MitchAlsup1 wrote:
On Fri, 26 Jul 2024 20:27:04 +0000, EricP wrote:
Anton Ertl wrote:
EricP <ThatWouldBeTelling@thevillage.com> writes:
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor
tables on thread switch and I would think that would shut this all
down.
1) The attacker can still attack the context (even if the notion of
context includes the privilege level) from within itself. E.g.,
the kernel can be attacked by training the kernel-level branch
prediction by performing appropriate system calls, and then
performing a system call that reveals data through a
mis-speculation side channel. IIRC such Spectre attacks have
already been demonstrated years ago.
I hadn't thought of this but yes, if JavaScript can contain remotely
exploitable gadgets then syscall might too. And it's not just syscall
args but any values that enter the kernel from outside that are used
as indexes after bounds checking. So the image file mapper, network
packets, etc.
But if I recall correctly the fix for JavaScript was something like
a judiciously placed FENCE instruction to block speculation.
And for the kernel this attack surface should be quite small as all of
these values are already validated.
So wouldn't it just be a matter of replacing certain kernel value
validation IF statements with IF_NO_SPECULATE?
2) Users are supposedly not prepared to pay the cost of invisible
speculation (a 5-20% slowdown, depending on which paper you read); are
they prepared to pay the cost of purging the user-mode entries of
branch predictors on thread switches?
It's actually thread switches that also switch the process, because
if the new thread is in the same process then there is no security
domain switch.
Plus, that peer thread could likely make use of the old user-mode
predictions.
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process.
47 threads in one process all crunching on one great big array/matrix.
This will show almost complete positive impact on sharing the BP.
And, as I said above, if the threads are in the same
process/address-space then the BP should be preserved across that
switch. But not if there were other intervening processes on the same
core.
Because if you retain the predictor values then the new thread has to
unlearn what it learned, before it starts to learn values for the new
thread. Whereas if the predictor is flushed it can immediately learn
its own values.
The BP only has 4 states in 2 bits; anything you initialize its state
to will take nearly as long to seed as a completely random table one
inherits from the previous process. {{BTBs are different}}
Admittedly I am going on intuition here, but it is based on the
assumption that a mispredicted taken branch that initiates a
non-sequential fetch is costlier than a mispredicted untaken branch
that continues sequentially.
In other words, assuming that resetting *ALL* branch predictors to
untaken, not just conditional branches but indirect branches and
CALL/RET too, and fetching sequentially is always cheaper than
fetching off in random directions at random points. Because fetching
sequentially uses resources of the I-TLB, I$L1 and prefetch buffer
that are already loaded, whereas non-sequential mispredictions will
initiate unnecessary loads of essentially random information.
It also depends on how quickly in the pipeline the mispredict can be
detected; some can be detected at Decode and others not until execute,
and how quickly unnecessary pending loads can be canceled and the
correct flow reestablished.
EricP <ThatWouldBeTelling@thevillage.com> writes:
Anton Ertl wrote:[...]
But if I recall correctly the fix for JavaScript was something like
a judiciously placed FENCE instruction to block speculation.
"A"? IIRC Speculative load hardening inserts LFENCE instructions in
lots of places, IIRC between every branch and a subsequent load. And
it relies on an Intel-specific non-architectural (and thus not
documented in the architecture manual) side effect of LFENCE, and
AFAIK on AMD CPUs LFENCE does not have that side effect. And the
slowdowns I have seen in papers about speculative load hardening have
been in the region 2.3-2.5.
And for the kernel this attack surface should be quite small as all of
these values are already validated.
Which values are already validated and how does that happen?
What I read about the Linux kernel is that for Spectre v1 the kernel
developers try to put mitigations in those places where potential
attacker-controlled data is expected; one such mitigation is to turn
(predicted) control flow into (non-predicted) data flow. The problem
with that approach is that they can miss such a place, and even if it
works, it's extremely expensive in developer time.
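The Linux kernel's array_index_nospec() is the concrete form of that
control-flow-to-data-flow conversion. A freestanding sketch of the
same masking idea (simplified; like the kernel version it assumes
sizes below 2^63 and an arithmetic right shift of signed values):

#include <stddef.h>
#include <stdint.h>

/* All-ones when idx < size, all-zeroes otherwise, computed without a
   branch, so it is correct even under branch misprediction. Modeled
   on Linux's array_index_mask_nospec(). */
static inline size_t index_mask_nospec(size_t idx, size_t size)
{
    return (size_t)(~(intptr_t)(idx | (size - 1 - idx))
                    >> (sizeof(intptr_t) * 8 - 1));
}

uint8_t table[256];

uint8_t lookup(size_t idx)
{
    if (idx < sizeof(table)) {
        idx &= index_mask_nospec(idx, sizeof(table));
        /* a mispredicted bounds check now sees idx == 0, so the
           speculative access stays in bounds */
        return table[idx];
    }
    return 0;
}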
As for missing such places, that actually does happen: I read one paper
or web page where a security researcher needed some hole in the
Spectre defense of the kernel for his work (I don't remember what that
was) and thanked somebody else for providing information about such a
hole. I am sure this hole is fixed in newer versions of the kernel,
but who knows how many yet-undiscovered (by white hats) holes exist?
This shows that this approach to dealing with Spectre is not a good
long-term solution.
So wouldn't it just be a matter of replacing certain kernel value
validation IF statements with IF_NO_SPECULATE?
It's a little bit different, but the major issue here is which
"certain kernel value validation IF statements" should be hardened.
You can, e.g., apply ultimate speculative load hardening across the
whole kernel, and the kernel will slow down by a factor of about 2.5;
and that would fix just Spectre v1 and maybe a few others, but not all Spectre-type vulnerabilities.
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process.
This sounds very similar to the problem of aliasing of two different
branches in the branch predictor. The branch predictor researchers
have looked into that, and found that it does not pay off to tag
predictions with the branches they are for. The aliased branch is at
least as likely to benefit from the prediction as it is to suffer
from interference; as a further measure, agree predictors
[sprangle+97] were proposed; I don't know if they ever made it into
practical application.
As for the idea of erasing the branch predictor on process switch:
Consider the case where your CPU-bound process has to make way for a
short time slice of an I/O-bound process, and once that has submitted
its next synchronous I/O request, your CPU-bound process gets control
again. The I/O bound process tramples only over a small part of
branch predictor state, but if you erase on process switch, all the
branch predictor state will be gone when the CPU-bound process gets
the CPU core again. That's the reason why we do not erase
microarchitectural state on context switch; we do it neither for
caches nor for branch predictors.
Moreover, another process will likely use some of the same libraries
the earlier process used, and will benefit from having the branches in
the library predicted (unless ASLR prevents them from using the same
entries in the branch predictor).
@InProceedings{sprangle+97,
author = {Eric Sprangle and Robert S. Chappell and Mitch Alsup
and Yale N. Patt},
title = {The Agree Predictor: A Mechanism for Reducing
Negative Branch History Interference},
crossref = {isca97},
pages = {284--291},
annote = {Reduces the number of conflict mispredictions by
having the predictor entries predict whether or not
some other predictor (say, a static predictor) is
correct. This increases the chance that the
predicted direction is correct in case of a
conflict.}
}
@Proceedings{isca97,
title = "$24^\textit{th}$ Annual International Symposium on Computer Architecture",
booktitle = "$24^\textit{th}$ Annual International Symposium on Computer Architecture",
year = "1997",
key = "ISCA 24",
}
Because if you retain the predictor values then the new thread has to
unlearn what it learned, before it starts to learn values for the new
thread. Whereas if the predictor is flushed it can immediately learn
its own values.
Unlearn? The only thing I can think of in that direction is that a
two-bit counter (for some history and maybe branch address) happens to
be in a state where two mispredictions instead of one are necessary
before the prediction changes. Anyway, branch prediction research
looked into the issue a long time ago and found that erasing on
context switch is a net loss.
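For concreteness, the standard two-bit saturating counter bounds the
whole "unlearning" cost; a minimal sketch:

#include <stdbool.h>

/* 0 = strongly not-taken, 1 = weakly not-taken,
   2 = weakly taken,       3 = strongly taken */
typedef unsigned char ctr2;

static bool predict_taken(ctr2 c) { return c >= 2; }

static ctr2 update(ctr2 c, bool taken)
{
    if (taken)
        return c < 3 ? c + 1 : 3;  /* saturate at strongly taken */
    return c > 0 ? c - 1 : 0;      /* saturate at strongly not-taken */
}

/* Worst case: a counter inherited at 3 (strongly taken) facing an
   always-not-taken branch mispredicts twice (3 -> 2 -> 1) before it
   predicts correctly; a counter reset to weakly not-taken would
   mispredict zero times. So retention costs at most one extra
   misprediction per counter compared with a reset. */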
- anton
On Sat, 27 Jul 2024 19:31:39 +0000, EricP wrote:
MitchAlsup1 wrote:
On Fri, 26 Jul 2024 20:27:04 +0000, EricP wrote:
Anton Ertl wrote:
EricP <ThatWouldBeTelling@thevillage.com> writes:
One thing they mention is Intel and AMD incorporating privilege level
tagging into the BTB, as I suggested when this all started.
Combine that with purging the user mode entries from the predictor
tables on thread switch and I would think that would shut this all
down.
1) The attacker can still attack the context (even if the notion of
context includes the privilege level) from within itself. E.g.,
the kernel can be attacked by training the kernel-level branch
prediction by performing appropriate system calls, and then
performing a system call that reveals data through a
mis-speculation side channel. IIRC such Spectre attacks have
already been demonstrated years ago.
I hadn't thought of this but yes, if JavaScript can contain remotely
exploitable gadgets then syscall might too. And it's not just syscall
args but any values that enter the kernel from outside that are used
as indexes after bounds checking. So the image file mapper, network
packets, etc.
But if I recall correctly the fix for JavaScript was something like
a judiciously placed FENCE instruction to block speculation.
And for the kernel this attack surface should be quite small as all of
these values are already validated.
So wouldn't it just be a matter of replacing certain kernel value
validation IF statements with IF_NO_SPECULATE?
2) Users are supposedly not prepared to pay the cost of invisible
speculation (a 5-20% slowdown, depending on which paper you read); are
they prepared to pay the cost of purging the user-mode entries of
branch predictors on thread switches?
It's actually thread switches that also switch the process, because
if the new thread is in the same process then there is no security
domain switch.
Plus, that peer thread could likely make use of the old user-mode
predictions.
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process.
47 threads in one process all crunching on one great big array/matrix.
This will show almost complete positive impact on sharing the BP.
And, as I said above, if the threads are in the same
process/address-space then the BP should be preserved across that
switch. But not if there were other intervening processes on the same
core.
Could be independent processes crunching on mmap() memory.
Because if you retain the predictor values then the new thread has to
unlearn what it learned, before it starts to learn values for the new
thread. Whereas if the predictor is flushed it can immediately learn
its own values.
The BP only has 4 states in 2 bits; anything you initialize its state
to will take nearly as long to seed as a completely random table one
inherits from the previous process. {{BTBs are different}}
Admittedly I am going on intuition here, but it is based on the
assumption that a mispredicted taken branch that initiates a
non-sequential fetch is costlier than a mispredicted untaken branch
that continues sequentially.
BTBs are supposed to get rid of much of the non-sequential Fetch delay.
In other words, assuming that resetting *ALL* branch predictors to
untaken,
I think you mean weakly untaken, not just untaken.
not just conditional branches but indirect branches and CALL/RET too,
Certainly CALL/RET, as we are now in a completely different context.
But note: in My 66000 switches are not indirect branches.....
method calls and external subroutines are--and these are easier to
predict than switches!
and fetching sequentially is always cheaper than fetching off in
random directions at random points. Because fetching sequentially uses
resources of the I-TLB, I$L1 and prefetch buffer that are already
loaded,
Same motivation for My 66000 Predication scheme--do not disrupt the
FETCHer.
whereas non-sequential mispredictions will initiate unnecessary loads
of essentially random information.
I am sitting around wondering if ASID might be used to avoid resetting
the predictors. If( myASID != storedASID ) don't use prediction.
This avoids the need to set 32KB of 4-state predictors to weakly
untaken. And then store 1 ASID per 512-bits of predictors, for a
3% overhead. The MC68030 TLB did something like this.
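A sketch of that scheme with the arithmetic spelled out: one 16-bit
ASID per 512-bit block of 2-bit counters is 16/512 = 3.125%, the ~3%
overhead above (the field sizes here are assumptions):

#include <stdbool.h>
#include <stdint.h>

/* One ASID tag per 512 bits of 2-bit counters (256 counters).
   16 tag bits / 512 payload bits = 3.125% overhead. */
struct bp_block {
    uint16_t asid;     /* address space that last trained this block */
    uint8_t  ctr[64];  /* 256 2-bit counters, packed 4 per byte */
};

static bool ctr_taken(const struct bp_block *b, unsigned i)
{
    return ((b->ctr[i / 4] >> ((i % 4) * 2)) & 3) >= 2;
}

/* If( myASID != storedASID ) don't use prediction: fall back to a
   static not-taken default instead of bulk-clearing 32KB of state.
   The trainer re-tags the block with myASID when it next writes. */
static bool predict(const struct bp_block *b, uint16_t my_asid, unsigned i)
{
    if (b->asid != my_asid)
        return false;               /* static default: not-taken */
    return ctr_taken(b, i);
}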
It also depends on how quickly in the pipeline the mispredict can be
detected; some can be detected at Decode and others not until execute,
and how quickly unnecessary pending loads can be canceled and the
correct flow reestablished.
MitchAlsup1 wrote:
On Sat, 27 Jul 2024 19:31:39 +0000, EricP wrote:
I am sitting around wondering if ASID might be used to avoid resetting
the predictors. If( myASID != storedASID ) don't use prediction.
This avoids the need to set 32KB of 4-state predictors to weakly
untaken. And then store 1 ASID per 512-bits of predictors, for a
3% overhead. The MC68030 TLB did something like this.
If the (12 bit?) ASID is for 512-bit blocks then the overhead could
be acceptable. But some of these BP structures, such as the BTB, are
set associative and would require a sequential scan to reset the
entries for an ASID.
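That scan would look something like this (the geometry is
illustrative); it is sets x ways of work, which is the cost argument
against per-ASID invalidation of a set-associative structure:

#include <stdbool.h>
#include <stdint.h>

#define BTB_SETS 512
#define BTB_WAYS 4

struct btb_entry {
    bool     valid;
    uint16_t asid;
    uint64_t tag;      /* fetch-address tag */
    uint64_t target;   /* predicted target VA */
};

static struct btb_entry btb[BTB_SETS][BTB_WAYS];

/* Invalidate every BTB entry belonging to one ASID: a full sequential
   scan, since one ASID's entries are scattered across sets and ways. */
static void btb_flush_asid(uint16_t asid)
{
    for (int s = 0; s < BTB_SETS; s++)
        for (int w = 0; w < BTB_WAYS; w++)
            if (btb[s][w].valid && btb[s][w].asid == asid)
                btb[s][w].valid = false;
}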
It also depends on how quickly in the pipeline the mispredict can be
detected; some can be detected at Decode and others not until execute,
and how quickly unnecessary pending loads can be canceled and the
correct flow reestablished.
The mispredict logic also needs to be able to abort an I-TLB table walk
if BP erroneously tells Fetch to follow the wrong path.
It's probably not possible to abort an I$L1 cache miss as a miss
buffer would already be allocated for it. But I would not want the
I$L1 cache port to be tied up waiting for an unneeded prefetch read.
Anton Ertl wrote:
EricP <ThatWouldBeTelling@thevillage.com> writes:
I have difficulty believing that the branch predictor values from some
thread in one process would be anything but a *negative* impact on a
random different thread in a different process.
This sounds very similar to the problem of aliasing of two different
branches in the branch predictor. The branch predictor researchers
have looked into that, and found that it does not pay off to tag
predictions with the branches they are for. The aliased branch is at
least as likely to benefit from the prediction as it is to suffer from
interference; as a further measure agree predictors [sprangle+97] were
proposed; I don't know if they ever made it into practical
application.
Yes, I assume aliasing is possible as one source of erroneous
predictions.
I view the branch predictor (BP) as a black box attached to the Fetch
stage. Fetch feeds BP with the current Fetch RIP virtual address
(FetRipVir) and gets back a hit/miss signal; on a hit, it also gets
the kind of branch/jump it is supposed to be and a target virtual
address (TargRipVir) to fetch from next.
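In C terms, the black-box interface just described might look like
this (a sketch only; the names follow the FetRipVir/TargRipVir
convention above):

#include <stdbool.h>
#include <stdint.h>

enum branch_kind { BK_COND, BK_INDIRECT, BK_CALL, BK_RET };

struct bp_result {
    bool             hit;          /* does BP recognize this fetch VA? */
    enum branch_kind kind;         /* on hit: kind of branch/jump */
    uint64_t         targ_rip_vir; /* on hit: predicted target VA */
};

/* Each cycle Fetch presents its current virtual fetch address; on a
   predicted-taken hit it redirects to targ_rip_vir, otherwise it
   continues sequentially. */
static struct bp_result bp_lookup(uint64_t fet_rip_vir)
{
    /* table lookup omitted: a subset of fet_rip_vir bits (or the
       BHR) indexes the tables, which is exactly where aliasing
       across address spaces comes from */
    struct bp_result miss = { .hit = false };
    (void)fet_rip_vir;
    return miss;
}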
As the BP would use a subset of the FetRipVir bits to index its
tables, or the equivalent in the Branch History Register (BHR), it's
possible for BP to erroneously send Fetch off on a wild goose chase,
triggering I-TLB table walks and/or I-cache misses.
A similar effect to aliasing occurs on an address space switch,
because the table and PHT indexes for one virtual address space are
completely different from those for another.
Then it becomes a matter of how quickly the mistake can be detected,
the previous path canceled and the correct path established, at what
cost.
As for the idea of erasing the branch predictor on process switch:
Consider the case where your CPU-bound process has to make way for a
short time slice of an I/O-bound process, and once that has submitted
its next synchronous I/O request, your CPU-bound process gets control
again. The I/O bound process tramples only over a small part of
branch predictor state, but if you erase on process switch, all the
branch predictor state will be gone when the CPU-bound process gets
the CPU core again. That's the reason why we do not erase
microarchitectural state on context switch; we do it neither for
caches nor for branch predictors.
Caches are not erased because they (a) usually are physically indexed
and physically tagged and (b) use all physical address bits in the
index-tag. If a cache is virtually indexed and tagged then it must be
flushed on address space switch, or its entries also tagged with an
ASID.
Where branch predictors use addresses, they use fetch virtual
addresses, and any tables indexed by those VAs will be invalid in a
different process. Also, to save space they often don't use the full
address bits but a subset, which leads to aliasing of BP info for
different instructions.
Moreover, another process will likely use some of the same libraries
the earlier process used, and will benefit from having the branches in
the library predicted (unless ASLR prevents them from using the same
entries in the branch predictor).
Even assuming this effect is significant I don't think it justifies
opening a security hole by retaining the BP tables, any more than it
would justify retaining the TLB for the prior address space.
[sprangle+97 BibTeX entries snipped]
Because if you retain the predictor values then the new thread has to
unlearn what it learned, before it starts to learn values for the new
thread. Whereas if the predictor is flushed it can immediately learn
its own values.
Unlearn? The only thing I can think of in that direction is that a
two-bit counter (for some history and maybe branch address) happens to
be in a state where two mispredictions instead of one are necessary
be in a state where two instead of one misprediction is necessary
before the prediction changes. Anyway, branch prediction research has
looked into the issue a long time ago and found that erasing on
context switch is a net loss.
- anton
In the above Agree Predictor the two-bit Pattern History Table (PHT)
is indexed by the multi-bit Branch History Table (BHT), and the BHT
must be retrained before it generates useful PHT indexes.
The Branch Bias Table (BBT) is one bit, indexed by the lower bits of
the Fetch RIP XOR'ed with the BHT. Even though this is only one bit to
toggle to train it, the XOR with the BHT means it too will only
generate useful indexes to select that one bit after the BHT is
retrained. Until then it will be toggling the wrong bias bits.
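Following that description, the BBT index computation would be
something like this (table size is illustrative):

#include <stdint.h>

#define BBT_BITS 12
#define BBT_SIZE (1u << BBT_BITS)

/* Low fetch-RIP bits XOR'ed with the branch history: until the
   history is retrained for the new address space, this selects
   essentially random bias bits. */
static inline uint32_t bbt_index(uint64_t fetch_rip, uint32_t bhr)
{
    return ((uint32_t)fetch_rip ^ bhr) & (BBT_SIZE - 1);
}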
In other BPs there are set-associative Branch Target Buffers (BTB)
that remember the target virtual address a branch will go to. Same for
the Indirect Branch Predictor and the CALL/RET stack predictor.
All of these would repeatedly send Fetch off on wild goose chases
until the current execution detects the mistakes, squashes any
instructions fetched along the erroneous path, cancels any pending
loads it triggered, and overwrites these entries.
On Wed, 24 Jul 2024 21:05:51 +0000, Thomas Koenig wrote:
This time, it's "Indirector".
https://thehackernews.com/2024/07/new-intel-cpu-vulnerability-indirector.html
The same article also mentions that ARM have their own issue
with the Memory Tagging Extension.
Another attack My 66000 is not sensitive to.
{{The Simpsons' Nelson: "Ha Ha"}}