Store Exclusive Register Halfword stores a halfword from a register to memory if the PE has exclusive access to the memory address, and returns a status value of 0 if the store was successful, or of 1 if no store was performed. See Synchronization and semaphores. The memory access is atomic.
For information about memory accesses see Load/Store addressing modes.
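The status value makes STXRH the second half of a load-exclusive/store-exclusive retry loop. The sketch below is not taken from the manual: it assumes a GCC- or Clang-style AArch64 toolchain, and the helper name atomic_add_u16 is illustrative.

#include <stdint.h>

/* Minimal sketch, assuming a GCC/Clang AArch64 toolchain: a 16-bit atomic add
 * built from an LDXRH/STXRH retry loop. STXRH writes 0 to the status register
 * on success, or 1 if no store was performed, so the loop retries on 1. */
static inline uint16_t atomic_add_u16(volatile uint16_t *addr, uint16_t val)
{
    uint16_t old;
    uint32_t newval, status;
    do {
        __asm__ volatile(
            "ldxrh  %w0, %3\n"         /* load-exclusive halfword               */
            "add    %w1, %w0, %w4\n"   /* compute the updated value             */
            "stxrh  %w2, %w1, %3\n"    /* store-exclusive; %w2 receives 0 or 1  */
            : "=&r"(old), "=&r"(newval), "=&r"(status), "+Q"(*addr)
            : "r"(val)
            : "memory");
    } while (status != 0);             /* 1 means the store was not performed   */
    return old;
}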
Bits  | Field | Value
------+-------+---------------------
31-30 | size  | 0 1
29-24 |       | 0 0 1 0 0 0
23    |       | 0
22    | L     | 0
21    |       | 0
20-16 | Rs    |
15    | o0    | 0
14-10 | Rt2   | (1) (1) (1) (1) (1)
9-5   | Rn    |
4-0   | Rt    |
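As a worked example of the table above, the following sketch packs the fixed bits and register fields into an instruction word. It is not from the manual, and the register numbers Rs = 2, Rn = 0, Rt = 1 are arbitrary illustrative choices.

#include <stdint.h>
#include <stdio.h>

/* Illustrative only: pack the fixed bits from the encoding table plus
 * arbitrary register numbers into a 32-bit instruction word. The (1) bits
 * of Rt2 are encoded as all-ones. */
static uint32_t encode_stxrh(uint32_t rs, uint32_t rn, uint32_t rt)
{
    uint32_t word = 0;
    word |= 0x1u  << 30;          /* size = 01                    */
    word |= 0x08u << 24;          /* bits 29-24 = 001000          */
                                  /* bits 23-21 = 000             */
    word |= (rs & 0x1Fu) << 16;   /* Rs: status register          */
                                  /* bit 15 (o0) = 0              */
    word |= 0x1Fu << 10;          /* Rt2 = (1)(1)(1)(1)(1)        */
    word |= (rn & 0x1Fu) << 5;    /* Rn: base register            */
    word |= (rt & 0x1Fu);         /* Rt: data register            */
    return word;
}

int main(void)
{
    printf("0x%08X\n", encode_stxrh(2, 0, 1));   /* prints 0x48027C01 */
    return 0;
}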
integer n = UInt(Rn);
integer t = UInt(Rt);
integer t2 = UInt(Rt2);    // ignored by load/store single register
integer s = UInt(Rs);      // ignored by all loads and store-release

AccType acctype = if o0 == '1' then AccType_ORDEREDATOMIC else AccType_ATOMIC;
boolean pair = FALSE;
MemOp memop = if L == '1' then MemOp_LOAD else MemOp_STORE;
integer elsize = 8 << UInt(size);
integer regsize = if elsize == 64 then 64 else 32;
integer datasize = if pair then elsize * 2 else elsize;
boolean tag_checked = n != 31;

boolean rt_unknown = FALSE;
boolean rn_unknown = FALSE;

if memop == MemOp_LOAD && pair && t == t2 then
    Constraint c = ConstrainUnpredictable(Unpredictable_LDPOVERLAP);
    assert c IN {Constraint_UNKNOWN, Constraint_UNDEF, Constraint_NOP};
    case c of
        when Constraint_UNKNOWN rt_unknown = TRUE;    // result is UNKNOWN
        when Constraint_UNDEF   UNDEFINED;
        when Constraint_NOP     EndOfInstruction();

if memop == MemOp_STORE then
    if s == t || (pair && s == t2) then
        Constraint c = ConstrainUnpredictable(Unpredictable_DATAOVERLAP);
        assert c IN {Constraint_UNKNOWN, Constraint_UNDEF, Constraint_NOP};
        case c of
            when Constraint_UNKNOWN rt_unknown = TRUE;    // store UNKNOWN value
            when Constraint_UNDEF   UNDEFINED;
            when Constraint_NOP     EndOfInstruction();
    if s == n && n != 31 then
        Constraint c = ConstrainUnpredictable(Unpredictable_BASEOVERLAP);
        assert c IN {Constraint_UNKNOWN, Constraint_UNDEF, Constraint_NOP};
        case c of
            when Constraint_UNKNOWN rn_unknown = TRUE;    // address is UNKNOWN
            when Constraint_UNDEF   UNDEFINED;
            when Constraint_NOP     EndOfInstruction();
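For this encoding, size is '01', L is '0' and o0 is '0', so the decode above fixes a 16-bit, non-pair, plain-atomic store. A small constant-folding sketch (illustrative C, not the architectural pseudocode):

#include <stdbool.h>
#include <stdio.h>

/* Illustrative only: the decode above specialised to STXRH
 * (size = '01', L = '0', o0 = '0'). */
int main(void)
{
    unsigned size = 0x1;                    /* bits 31-30 = 01: halfword        */
    bool L  = false;                        /* bit 22 = 0: store, not load      */
    bool o0 = false;                        /* bit 15 = 0: no release ordering  */

    unsigned elsize   = 8u << size;         /* 16                               */
    unsigned regsize  = (elsize == 64) ? 64 : 32;
    bool     pair     = false;
    unsigned datasize = pair ? elsize * 2 : elsize;

    printf("elsize=%u regsize=%u datasize=%u memop=%s acctype=%s\n",
           elsize, regsize, datasize,
           L ? "LOAD" : "STORE",
           o0 ? "ORDEREDATOMIC" : "ATOMIC");
    /* elsize=16 regsize=32 datasize=16 memop=STORE acctype=ATOMIC */
    return 0;
}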
<Ws>      Is the 32-bit name of the general-purpose register into which the status result of the store exclusive is written, encoded in the "Rs" field. The value returned is 0 if the operation updates memory, or 1 if the operation fails to update memory.
<Wt>      Is the 32-bit name of the general-purpose register to be transferred, encoded in the "Rt" field.
<Xn|SP>   Is the 64-bit name of the general-purpose base register or stack pointer, encoded in the "Rn" field.
Aborts and alignment
If a synchronous Data Abort exception is generated by the execution of this instruction:
- Memory is not updated.
- <Ws> is not updated.
A non halfword-aligned memory address causes an Alignment fault Data Abort exception to be generated, subject to the following rules:
- If AArch64.ExclusiveMonitorsPass() returns TRUE, the exception is generated.
- If AArch64.ExclusiveMonitorsPass() returns FALSE and the memory address, if accessed, would generate a synchronous Data Abort exception, it is IMPLEMENTATION DEFINED whether the exception is generated.
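A compact way to read the alignment rules is as a predicate on the address and the exclusive monitor check. The sketch below is an illustrative model only, with the IMPLEMENTATION DEFINED choice passed in as a flag; it is not architectural code.

#include <stdbool.h>
#include <stdint.h>

/* Illustrative model of the alignment rules above. A halfword access is
 * aligned when the address is a multiple of 2; whether an unaligned address
 * faults when the monitor check fails is IMPLEMENTATION DEFINED (modelled
 * here as a flag). */
static bool alignment_fault_generated(uint64_t address,
                                      bool monitors_pass,
                                      bool impl_def_fault_on_monitor_fail)
{
    bool unaligned = (address & 0x1u) != 0;
    if (!unaligned)
        return false;
    return monitors_pass ? true : impl_def_fault_on_monitor_fail;
}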
bits(64) address;
bits(datasize) data;
constant integer dbytes = datasize DIV 8;

if HaveMTE2Ext() then
    SetTagCheckedInstruction(tag_checked);

if n == 31 then
    CheckSPAlignment();
    address = SP[];
elsif rn_unknown then
    address = bits(64) UNKNOWN;
else
    address = X[n];

case memop of
    when MemOp_STORE
        if rt_unknown then
            data = bits(datasize) UNKNOWN;
        elsif pair then
            bits(datasize DIV 2) el1 = X[t];
            bits(datasize DIV 2) el2 = X[t2];
            data = if BigEndian(acctype) then el1 : el2 else el2 : el1;
        else
            data = X[t];
        bit status = '1';
        // Check whether the Exclusives monitors are set to include the
        // physical memory locations corresponding to virtual address
        // range [address, address+dbytes-1].
        if AArch64.ExclusiveMonitorsPass(address, dbytes) then
            // This atomic write will be rejected if it does not refer
            // to the same physical locations after address translation.
            Mem[address, dbytes, acctype] = data;
            status = ExclusiveMonitorsStatus();
        X[s] = ZeroExtend(status, 32);

    when MemOp_LOAD
        // Tell the Exclusives monitors to record a sequence of one or more atomic
        // memory reads from virtual address range [address, address+dbytes-1].
        // The Exclusives monitor will only be set if all the reads are from the
        // same dbytes-aligned physical address, to allow for the possibility of
        // an atomicity break if the translation is changed between reads.
        AArch64.SetExclusiveMonitors(address, dbytes);

        if pair then
            if rt_unknown then
                // ConstrainedUNPREDICTABLE case
                X[t] = bits(datasize) UNKNOWN;    // In this case t = t2
            elsif elsize == 32 then
                // 32-bit load exclusive pair (atomic)
                data = Mem[address, dbytes, acctype];
                if BigEndian(acctype) then
                    X[t]  = data<datasize-1:elsize>;
                    X[t2] = data<elsize-1:0>;
                else
                    X[t]  = data<elsize-1:0>;
                    X[t2] = data<datasize-1:elsize>;
            else // elsize == 64
                // 64-bit load exclusive pair (not atomic),
                // but must be 128-bit aligned
                if address != Align(address, dbytes) then
                    iswrite = FALSE;
                    secondstage = FALSE;
                    AArch64.Abort(address, AlignmentFault(acctype, iswrite, secondstage));
                X[t]  = Mem[address + 0, 8, acctype];
                X[t2] = Mem[address + 8, 8, acctype];
        else
            data = Mem[address, dbytes, acctype];
            X[t] = ZeroExtend(data, regsize);
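For this instruction memop is MemOp_STORE and pair is FALSE, so only the store arm of the operation above is taken. Below is a simplified, non-architectural C model of that path; exclusive_monitors_pass, exclusive_monitors_status and write_mem16 are hypothetical stand-ins for the pseudocode functions, not real APIs.

#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for AArch64.ExclusiveMonitorsPass(),
 * ExclusiveMonitorsStatus() and the Mem[] write; they are assumptions made
 * for this sketch, not architectural or library functions. */
extern bool     exclusive_monitors_pass(uint64_t address, unsigned nbytes);
extern unsigned exclusive_monitors_status(void);    /* 0 = store performed */
extern void     write_mem16(uint64_t address, uint16_t data);

/* Simplified model of the STXRH store path: write X[t]<15:0> if the PE still
 * has exclusive access, and return the status value written to X[s]. */
static uint32_t stxrh_model(uint64_t address, uint32_t xt)
{
    uint16_t data   = (uint16_t)xt;    /* datasize = 16: only the low halfword     */
    uint32_t status = 1;               /* 1 = no store performed                   */

    if (exclusive_monitors_pass(address, 2)) {
        write_mem16(address, data);    /* atomic halfword write                    */
        status = exclusive_monitors_status();
    }
    return status;                     /* zero-extended into X[s] by the instruction */
}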
If PSTATE.DIT is 1, the timing of this instruction is insensitive to the value of the data being loaded or stored.