**by Pengo Wray**

WebAssembly is an open, industry-wide effort to bring a safe, efficient assembly language to the web. WebAssembly technology is developed collaboratively by major browser vendors including Mozilla, Google, Microsoft, and Apple. WebAssembly modules can be downloaded and executed by the majority of browsers in use today.

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
0_ | unreachable | nop | block | loop | if | else | *try | *catch | *throw | *rethrow | *throw_ref | end | br | br_if | br_table | return |
1_ | call | call_indirect | *return_call | *return_call_indirect | *call_ref | *return_call_ref | *delegate | *catch_all | drop | select | *select t | *try_table | ||||
2_ | local.get | local.set | local.tee | global.get | global.set | *table.get | *table.set | i32.load | i64.load | f32.load | f64.load | i32.load8_s | i32.load8_u | i32.load16_s | i32.load16_u | |
3_ | i64.load8_s | i64.load8_u | i64.load16_s | i64.load16_u | i64.load32_s | i64.load32_u | i32.store | i64.store | f32.store | f64.store | i32.store8 | i32.store16 | i64.store8 | i64.store16 | i64.store32 | memory.size |
4_ | memory.grow | i32.const | i64.const | f32.const | f64.const | i32.eqz | i32.eq | i32.ne | i32.lt_s | i32.lt_u | i32.gt_s | i32.gt_u | i32.le_s | i32.le_u | i32.ge_s | i32.ge_u |
5_ | i64.eqz | i64.eq | i64.ne | i64.lt_s | i64.lt_u | i64.gt_s | i64.gt_u | i64.le_s | i64.le_u | i64.ge_s | i64.ge_u | f32.eq | f32.ne | f32.lt | f32.gt | f32.le |
6_ | f32.ge | f64.eq | f64.ne | f64.lt | f64.gt | f64.le | f64.ge | i32.clz | i32.ctz | i32.popcnt | i32.add | i32.sub | i32.mul | i32.div_s | i32.div_u | i32.rem_s |
7_ | i32.rem_u | i32.and | i32.or | i32.xor | i32.shl | i32.shr_s | i32.shr_u | i32.rotl | i32.rotr | i64.clz | i64.ctz | i64.popcnt | i64.add | i64.sub | i64.mul | i64.div_s |
8_ | i64.div_u | i64.rem_s | i64.rem_u | i64.and | i64.or | i64.xor | i64.shl | i64.shr_s | i64.shr_u | i64.rotl | i64.rotr | f32.abs | f32.neg | f32.ceil | f32.floor | f32.trunc |
9_ | f32.nearest | f32.sqrt | f32.add | f32.sub | f32.mul | f32.div | f32.min | f32.max | f32.copysign | f64.abs | f64.neg | f64.ceil | f64.floor | f64.trunc | f64.nearest | f64.sqrt |
A_ | f64.add | f64.sub | f64.mul | f64.div | f64.min | f64.max | f64.copysign | i32.wrap_i64 | i32.trunc_f32_s | i32.trunc_f32_u | i32.trunc_f64_s | i32.trunc_f64_u | i64.extend_i32_s | i64.extend_i32_u | i64.trunc_f32_s | i64.trunc_f32_u |
B_ | i64.trunc_f64_s | i64.trunc_f64_u | f32.convert_i32_s | f32.convert_i32_u | f32.convert_i64_s | f32.convert_i64_u | f32.demote_f64 | f64.convert_i32_s | f64.convert_i32_u | f64.convert_i64_s | f64.convert_i64_u | f64.promote_f32 | i32.reinterpret_f32 | i64.reinterpret_f64 | f32.reinterpret_i32 | f64.reinterpret_i64 |
C_ | *i32.extend8_s | *i32.extend16_s | *i64.extend8_s | *i64.extend16_s | *i64.extend32_s | |||||||||||
D_ | *ref.null | *ref.is_null | *ref.func | *ref.as_non_null | *br_on_null | *ref.eq | *br_on_non_null | |||||||||
E_ | ||||||||||||||||
F_ | | | | | | | | | | | | ⭕ GC ➰ Str | ⭐ FC | 🌀 SIMD | 🧵 Threads | |

Multibyte instructions beginning with 0xFB: the proposal to add garbage collection (GC) support.

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FB 0_ | struct.new_canon | struct.new_canon_default | struct.get | struct.get_s | struct.get_u | struct.set | ||||||||||
FB 1_ | array.new_canon | array.new_canon_default | array.get | array.get_s | array.get_u | array.set | array.len | array.new_canon_fixed | array.new_canon_data | array.new_canon_elem | ||||||
FB 2_ | i31.new | i31.get_s | i31.get_u | |||||||||||||
FB 3_ | ||||||||||||||||
FB 4_ | ref.test | ref.cast | br_on_cast | br_on_cast_fail | ref.test | ref.cast | br_on_cast | br_on_cast_fail | ||||||||
FB 5_ | ||||||||||||||||
FB 6_ | ||||||||||||||||
FB 7_ | extern.internalize | extern.externalize |

The reference-typed strings (stringref) proposal is a phase 1 proposal and may change in future. [As of 2022]

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FB 8_ | string.new_utf8 | string.new_wtf16 | string.const | string.measure_utf8 | string.measure_wtf8 | string.measure_wtf16 | string.encode_utf8 | string.encode_wtf16 | string.concat | string.eq | string.is_usv_sequence | string.new_lossy_utf8 | string.new_wtf8 | string.encode_lossy_utf8 | string.encode_wtf8 | |
FB 9_ | string.as_wtf8 | stringview_wtf8.advance | stringview_wtf8.encode_utf8 | stringview_wtf8.slice | stringview_wtf8.encode_lossy_utf8 | stringview_wtf8.encode_wtf8 | string.as_wtf16 | stringview_wtf16.length | stringview_wtf16.get_codeunit | stringview_wtf16.encode | stringview_wtf16.slice | |||||
FB A_ | string.as_iter | stringview_iter.next | stringview_iter.advance | stringview_iter.rewind | stringview_iter.slice | |||||||||||
FB B_ | string.new_utf8_array | string.new_wtf16_array | string.encode_utf8_array | string.encode_wtf16_array | string.new_lossy_utf8_array | string.new_wtf8_array | string.encode_lossy_utf8_array | string.encode_wtf8_array |

Multibyte instructions beginning with 0xFC.

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FC 0_ | i32.trunc_sat_f32_s | i32.trunc_sat_f32_u | i32.trunc_sat_f64_s | i32.trunc_sat_f64_u | i64.trunc_sat_f32_s | i64.trunc_sat_f32_u | i64.trunc_sat_f64_s | i64.trunc_sat_f64_u | memory.init | data.drop | memory.copy | memory.fill | table.init | elem.drop | table.copy | table.grow |
FC 1_ | table.size | table.fill |

SIMD (single instruction, multiple data) instructions begin with 0xFD. Relaxed SIMD prototype opcodes are marked with *.

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FD 0_ | v128.load | v128.load8x8_s | v128.load8x8_u | v128.load16x4_s | v128.load16x4_u | v128.load32x2_s | v128.load32x2_u | v128.load8_splat | v128.load16_splat | v128.load32_splat | v128.load64_splat | v128.store | v128.const | i8x16.shuffle | i8x16.swizzle | i8x16.splat |
FD 1_ | i16x8.splat | i32x4.splat | i64x2.splat | f32x4.splat | f64x2.splat | i8x16.extract_lane_s | i8x16.extract_lane_u | i8x16.replace_lane | i16x8.extract_lane_s | i16x8.extract_lane_u | i16x8.replace_lane | i32x4.extract_lane | i32x4.replace_lane | i64x2.extract_lane | i64x2.replace_lane | f32x4.extract_lane |
FD 2_ | f32x4.replace_lane | f64x2.extract_lane | f64x2.replace_lane | i8x16.eq | i8x16.ne | i8x16.lt_s | i8x16.lt_u | i8x16.gt_s | i8x16.gt_u | i8x16.le_s | i8x16.le_u | i8x16.ge_s | i8x16.ge_u | i16x8.eq | i16x8.ne | i16x8.lt_s |
FD 3_ | i16x8.lt_u | i16x8.gt_s | i16x8.gt_u | i16x8.le_s | i16x8.le_u | i16x8.ge_s | i16x8.ge_u | i32x4.eq | i32x4.ne | i32x4.lt_s | i32x4.lt_u | i32x4.gt_s | i32x4.gt_u | i32x4.le_s | i32x4.le_u | i32x4.ge_s |
FD 4_ | i32x4.ge_u | f32x4.eq | f32x4.ne | f32x4.lt | f32x4.gt | f32x4.le | f32x4.ge | f64x2.eq | f64x2.ne | f64x2.lt | f64x2.gt | f64x2.le | f64x2.ge | v128.not | v128.and | v128.andnot |
FD 5_ | v128.or | v128.xor | v128.bitselect | v128.any_true | v128.load8_lane | v128.load16_lane | v128.load32_lane | v128.load64_lane | v128.store8_lane | v128.store16_lane | v128.store32_lane | v128.store64_lane | v128.load32_zero | v128.load64_zero | f32x4.demote_f64x2_zero | f64x2.promote_low_f32x4 |
FD 6_ | i8x16.abs | i8x16.neg | i8x16.popcnt | i8x16.all_true | i8x16.bitmask | i8x16.narrow_i16x8_s | i8x16.narrow_i16x8_u | f32x4.ceil | f32x4.floor | f32x4.trunc | f32x4.nearest | i8x16.shl | i8x16.shr_s | i8x16.shr_u | i8x16.add | i8x16.add_sat_s |
FD 7_ | i8x16.add_sat_u | i8x16.sub | i8x16.sub_sat_s | i8x16.sub_sat_u | f64x2.ceil | f64x2.floor | i8x16.min_s | i8x16.min_u | i8x16.max_s | i8x16.max_u | f64x2.trunc | i8x16.avgr_u | i16x8.extadd_pairwise_i8x16_s | i16x8.extadd_pairwise_i8x16_u | i32x4.extadd_pairwise_i16x8_s | i32x4.extadd_pairwise_i16x8_u |
FD 8_ | i16x8.abs | i16x8.neg | i16x8.q15mulr_sat_s | i16x8.all_true | i16x8.bitmask | i16x8.narrow_i32x4_s | i16x8.narrow_i32x4_u | i16x8.extend_low_i8x16_s | i16x8.extend_high_i8x16_s | i16x8.extend_low_i8x16_u | i16x8.extend_high_i8x16_u | i16x8.shl | i16x8.shr_s | i16x8.shr_u | i16x8.add | i16x8.add_sat_s |
FD 9_ | i16x8.add_sat_u | i16x8.sub | i16x8.sub_sat_s | i16x8.sub_sat_u | f64x2.nearest | i16x8.mul | i16x8.min_s | i16x8.min_u | i16x8.max_s | i16x8.max_u | i16x8.avgr_u | i16x8.extmul_low_i8x16_s | i16x8.extmul_high_i8x16_s | i16x8.extmul_low_i8x16_u | i16x8.extmul_high_i8x16_u | |
FD A_ | i32x4.abs | i32x4.neg | *i8x16.relaxed_swizzle | i32x4.all_true | i32x4.bitmask | *i32x4.relaxed_trunc_f32x4_s | *i32x4.relaxed_trunc_f32x4_u | i32x4.extend_low_i16x8_s | i32x4.extend_high_i16x8_s | i32x4.extend_low_i16x8_u | i32x4.extend_high_i16x8_u | i32x4.shl | i32x4.shr_s | i32x4.shr_u | i32x4.add | *f32x4.relaxed_madd |
FD B_ | *f32x4.relaxed_nmadd | i32x4.sub | *i8x16.relaxed_laneselect | *i16x8.relaxed_laneselect | *f32x4.relaxed_min | i32x4.mul | i32x4.min_s | i32x4.min_u | i32x4.max_s | i32x4.max_u | i32x4.dot_i16x8_s | i32x4.extmul_low_i16x8_s | i32x4.extmul_high_i16x8_s | i32x4.extmul_low_i16x8_u | i32x4.extmul_high_i16x8_u | |
FD C_ | i64x2.abs | i64x2.neg | i64x2.all_true | i64x2.bitmask | *i32x4.relaxed_trunc_f64x2_s_zero | *i32x4.relaxed_trunc_f64x2_u_zero | i64x2.extend_low_i32x4_s | i64x2.extend_high_i32x4_s | i64x2.extend_low_i32x4_u | i64x2.extend_high_i32x4_u | i64x2.shl | i64x2.shr_s | i64x2.shr_u | i64x2.add | *f64x2.relaxed_madd | |
FD D_ | *f64x2.relaxed_nmadd | i64x2.sub | *i32x4.relaxed_laneselect | *i64x2.relaxed_laneselect | *f64x2.relaxed_min | i64x2.mul | i64x2.eq | i64x2.ne | i64x2.lt_s | i64x2.gt_s | i64x2.le_s | i64x2.ge_s | i64x2.extmul_low_i32x4_s | i64x2.extmul_high_i32x4_s | i64x2.extmul_low_i32x4_u | i64x2.extmul_high_i32x4_u |
FD E_ | f32x4.abs | f32x4.neg | *f32x4.relaxed_max | f32x4.sqrt | f32x4.add | f32x4.sub | f32x4.mul | f32x4.div | f32x4.min | f32x4.max | f32x4.pmin | f32x4.pmax | f64x2.abs | f64x2.neg | *f64x2.relaxed_max | f64x2.sqrt |
FD F_ | f64x2.add | f64x2.sub | f64x2.mul | f64x2.div | f64x2.min | f64x2.max | f64x2.pmin | f64x2.pmax | i32x4.trunc_sat_f32x4_s | i32x4.trunc_sat_f32x4_u | f32x4.convert_i32x4_s | f32x4.convert_i32x4_u | i32x4.trunc_sat_f64x2_s_zero | i32x4.trunc_sat_f64x2_u_zero | f64x2.convert_low_i32x4_s | f64x2.convert_low_i32x4_u |

Relaxed SIMD opcodes 0xFD 0x100 and above (0xFD 0x1__).

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FD 10_ | i8x16.relaxed_swizzle | i32x4.relaxed_trunc_f32x4_s | i32x4.relaxed_trunc_f32x4_u | i32x4.relaxed_trunc_f64x2_s_zero | i32x4.relaxed_trunc_f64x2_u_zero | f32x4.relaxed_madd | f32x4.relaxed_nmadd | f64x2.relaxed_madd | f64x2.relaxed_nmadd | i8x16.relaxed_laneselect | i16x8.relaxed_laneselect | i32x4.relaxed_laneselect | i64x2.relaxed_laneselect | f32x4.relaxed_min | f32x4.relaxed_max | f64x2.relaxed_min |
FD 11_ | f64x2.relaxed_max | i16x8.relaxed_q15mulr_s | i16x8.relaxed_dot_i8x16_i7x16_s | i32x4.relaxed_dot_i8x16_i7x16_add_s | f32x4.relaxed_dot_bf16x8_add_f32x4 | |||||||||||
FD 12_ |

Multibyte instructions beginning with 0xFE.

_0 | _1 | _2 | _3 | _4 | _5 | _6 | _7 | _8 | _9 | _A | _B | _C | _D | _E | _F | |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
FE 0_ | memory.atomic.notify | memory.atomic.wait32 | memory.atomic.wait64 | atomic.fence | ||||||||||||
FE 1_ | i32.atomic.load | i64.atomic.load | i32.atomic.load8_u | i32.atomic.load16_u | i64.atomic.load8_u | i64.atomic.load16_u | i64.atomic.load32_u | i32.atomic.store | i64.atomic.store | i32.atomic.store8 | i32.atomic.store16 | i64.atomic.store8 | i64.atomic.store16 | i64.atomic.store32 | i32.atomic.rmw.add | i64.atomic.rmw.add |
FE 2_ | i32.atomic.rmw8.add_u | i32.atomic.rmw16.add_u | i64.atomic.rmw8.add_u | i64.atomic.rmw16.add_u | i64.atomic.rmw32.add_u | i32.atomic.rmw.sub | i64.atomic.rmw.sub | i32.atomic.rmw8.sub_u | i32.atomic.rmw16.sub_u | i64.atomic.rmw8.sub_u | i64.atomic.rmw16.sub_u | i64.atomic.rmw32.sub_u | i32.atomic.rmw.and | i64.atomic.rmw.and | i32.atomic.rmw8.and_u | i32.atomic.rmw16.and_u |
FE 3_ | i64.atomic.rmw8.and_u | i64.atomic.rmw16.and_u | i64.atomic.rmw32.and_u | i32.atomic.rmw.or | i64.atomic.rmw.or | i32.atomic.rmw8.or_u | i32.atomic.rmw16.or_u | i64.atomic.rmw8.or_u | i64.atomic.rmw16.or_u | i64.atomic.rmw32.or_u | i32.atomic.rmw.xor | i64.atomic.rmw.xor | i32.atomic.rmw8.xor_u | i32.atomic.rmw16.xor_u | i64.atomic.rmw8.xor_u | i64.atomic.rmw16.xor_u |
FE 4_ | i64.atomic.rmw32.xor_u | i32.atomic.rmw.xchg | i64.atomic.rmw.xchg | i32.atomic.rmw8.xchg_u | i32.atomic.rmw16.xchg_u | i64.atomic.rmw8.xchg_u | i64.atomic.rmw16.xchg_u | i64.atomic.rmw32.xchg_u | i32.atomic.rmw.cmpxchg | i64.atomic.rmw.cmpxchg | i32.atomic.rmw8.cmpxchg_u | i32.atomic.rmw16.cmpxchg_u | i64.atomic.rmw8.cmpxchg_u | i64.atomic.rmw16.cmpxchg_u | i64.atomic.rmw32.cmpxchg_u |

The unreachable instruction causes an unconditional trap.

A trap immediately aborts execution. Traps cannot be handled by WebAssembly code, but are reported to the outside environment, where they typically can be caught.

**stack-polymorphic**: performs an *unconditional control transfer*.

The nop instruction does nothing.

### Followed by:

- i8 *rt*: blocktype — 0x40 = [], 0x7F = [i32], 0x7E = [i64], 0x7D = [f32], 0x7C = [f64]
- instructions
- 0x0B — end

### Stack:

[] → [t^{∗}]

The beginning of a block construct: a sequence of instructions with a label at the end.

The result type of the instructions must match the blocktype.

The *block*, *loop* and *if* instructions are structured instructions. They bracket nested sequences of instructions, called blocks, terminated with, or separated by, *end* or *else* pseudo-instructions. They must be well-nested.

### Followed by:

- i8 *rt*: blocktype — 0x40 = [], 0x7F = [i32], 0x7E = [i64], 0x7D = [f32], 0x7C = [f64]
- instructions
- 0x0B — end

### Stack:

[] → [t^{∗}]

A block with a label at the beginning, which may be used to form loops.

### Followed by (without else):

- i8 *rt*: blocktype — 0x40 = [], 0x7F = [i32], 0x7E = [i64], 0x7D = [f32], 0x7C = [f64]
- instructions_{1}
- 0x0B — end

### Followed by (with else):

- i8 *rt*: blocktype
- instructions_{1}
- 0x05 — else
- instructions_{2}
- 0x0B — end

### Stack:

[i32] → [t^{∗}]

If *c* is non-zero, enter block instructions_{1}, else enter block instructions_{2}.

The beginning of an if construct with an implicit *then* block.


Marks the else block of an *if*.

begins a block which can handle thrown exceptions

*Exception Handling Proposal*

begins the catch block of the try block

*Exception Handling Proposal*

Creates an exception defined by the tag and then throws it

*Exception Handling Proposal*

Re-throws the exception caught by the catch block referred to by its label immediate

*Exception Handling Proposal*

Pops the exnref on top of the stack and throws it

*Exception Handling Proposal*

Marks the end of a *block*, *loop*, *if*, or function.

### Followed by:

- u32 *l* : labelidx

### Stack:

[t^{∗}_{1} t^{?}] → [t^{∗}_{2}]

Branch to a given label in an enclosing construct.

Performs an unconditional branch.

Label 0 refers to the innermost structured control instruction enclosing the referring branch instruction, while increasing indices refer to those farther out.

A branch targeting a *block* or *if* behaves like a break statement in most C-like languages, while a branch targeting a *loop* behaves like a continue statement.

**stack-polymorphic**: performs an *unconditional control transfer*.

### Followed by:

- u32 *l* : labelidx

### Stack:

[t^{?} i32] → [t^{?}]

Performs a conditional branch, branching if i32 *c* is non-zero.

Conditionally branch to a given label in an enclosing construct.

### Followed by:

- *l*^{∗}: vec( labelidx )
- u32 *l*: labelidx (default target)

### Stack:

[t^{∗}_{1} t^{?} i32] → [t^{∗}_{2}]

A jump table which jumps to a label in an enclosing construct.

Performs an indirect branch through an operand indexing into the label vector that is an immediate to the instruction, or to a default target if the operand is out of bounds.

**stack-polymorphic**: performs an *unconditional control transfer*.
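The target-selection rule above can be sketched in a few lines of Python; `select_target`, `targets`, and `default` are illustrative names, not spec terminology:

```python
def select_target(targets, default, operand):
    """Model of br_table: pick targets[operand], or the default label
    if the operand indexes past the end of the label vector."""
    if 0 <= operand < len(targets):
        return targets[operand]
    return default

# br_table with label vector [10, 20, 30] and default label 99:
select_target([10, 20, 30], 99, 1)   # → 20
select_target([10, 20, 30], 99, 7)   # → 99 (out of bounds, default target)
```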

Return zero or more values from this function.

The return instruction is a shortcut for an unconditional branch to the outermost block, which implicitly is the body of the current function.

**stack-polymorphic**: performs an *unconditional control transfer*.

### Followed by:

- u32 *x* : funcidx

### Stack:

[t^{∗}_{1}] → [t^{∗}_{2}]

The call instruction invokes another function, consuming the necessary arguments from the stack and returning the result values of the call.

### Followed by:

- u32 *x*: typeidx
- 0x00 — zero byte

### Stack:

[t^{∗}_{1} i32] → [t^{∗}_{2}]

The call_indirect instruction calls a function indirectly through an operand indexing into a table.

In future versions of WebAssembly, the zero byte may be used to index additional tables.

the tail-call version of call

*Tail calls proposal*

the tail-call version of call_indirect

*Tail calls proposal*

call a function through a ref $t

*Typed Function References Proposal*

the tail-call version of call_ref

*Typed Function References Proposal*

begins the delegate block of the try block

*Exception Handling Proposal*

begins the catch_all block of the try block

*Exception Handling Proposal*

The drop instruction simply throws away a single operand.

The select instruction selects one of its first two operands based on whether its third operand is zero or not.

Only annotated 'select' can be used with reference types.

*Reference Types Proposal*
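The value-level behavior of plain select can be modeled as follows (`wasm_select` is an illustrative name, not an API):

```python
def wasm_select(val1, val2, cond):
    """Model of select: returns val1 if cond is non-zero, else val2.
    The condition is the third (topmost) operand on the stack."""
    return val1 if cond != 0 else val2

wasm_select(7, 9, 1)   # → 7
wasm_select(7, 9, 0)   # → 9
```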

begins a block which can handle thrown exceptions

*Exception Handling Proposal*

### Followed by:

- u32 *x* : localidx

### Stack:

[] → [t]

This instruction gets the value of a local variable.

The index space for locals is only accessible inside a function and includes the parameters of that function, which precede the local variables.

The *locals* context refers to the list of locals declared in the current function (including parameters), represented by their value type.

### Followed by:

- u32 *x* : localidx

### Stack:

[t] → []

This instruction sets the value of a local variable.

The index space for locals is only accessible inside a function and includes the parameters of that function, which precede the local variables.

### Followed by:

- u32 *x* : localidx

### Stack:

[t] → [t]

The local.tee instruction is like local.set but also returns its argument.

The index space for locals is only accessible inside a function and includes the parameters of that function, which precede the local variables.

### Followed by:

- u32 *x* : globalidx

### Stack:

[] → [t]

This instruction gets the value of a global variable.

The *globals* context is the list of globals declared in the current module, represented by their global type.

### Followed by:

- u32 *x* : globalidx

### Stack:

[t] → []

This instruction sets the value of a global variable.

reads an element from a table

*Reference Types Proposal*

writes an element to a table

*Reference Types Proposal*

### Followed by:

- *m* : memarg { u32 offset, u32 align }

### Stack:

[i32] → [i32]

i : address-operand → c : result

load 4 bytes as i32.

Memory is accessed with load and store instructions for the different value types. They all take a memory immediate *memarg* that contains an address offset and the expected alignment.

The immediate value memarg.align is an alignment hint for the effective address. The alignment is a power of 2, stored as log2(memarg.align). In practice its value may be 0 (8-bit), 1 (16-bit), 2 (32-bit), or 3 (64-bit).

`effective-address = address-operand + memarg.offset `

If memarg.align is incorrect it is considered "misaligned". Misaligned access still has the same behavior as aligned access, only possibly much slower.
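A small sketch of the address arithmetic and the alignment hint described above (illustrative helper names, not spec functions):

```python
def effective_address(address_operand, offset):
    """effective-address = address-operand + memarg.offset."""
    return address_operand + offset

def is_aligned(effective_addr, align_log2):
    """memarg.align stores log2 of the alignment in bytes; the hint is
    correct when the effective address is a multiple of 2**align_log2."""
    return effective_addr % (1 << align_log2) == 0

effective_address(100, 4)   # → 104
is_aligned(104, 2)          # → True  (104 is 4-byte aligned)
is_aligned(105, 2)          # → False (misaligned: still valid, just slower)
```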

### Followed by:

- *m* : memarg { u32 offset, u32 align }

### Stack:

[i32] → [i64]

load 8 bytes as i64.

The static address offset is added to the dynamic address operand, yielding a 33 bit effective address that is the zero-based index at which the memory is accessed. All values are read and written in little endian byte order. A trap results if any of the accessed memory bytes lies outside the address range implied by the memory’s current size.
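The load semantics above can be modeled in Python; `i64_load` is an illustrative name, and the raised exception stands in for a trap:

```python
import struct

def i64_load(memory: bytearray, address_operand: int, offset: int) -> int:
    """Model of i64.load: effective address, little-endian read, and a
    trap (modeled as an exception) when the access runs past the end."""
    ea = address_operand + offset
    if ea + 8 > len(memory):
        raise RuntimeError("trap: out-of-bounds memory access")
    return struct.unpack_from("<q", memory, ea)[0]   # "<" = little endian

mem = bytearray(16)
mem[0:8] = (258).to_bytes(8, "little")   # bytes 02 01 00 00 00 00 00 00
i64_load(mem, 0, 0)                      # → 258
```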

m
### Stack:

[i32] → [f32]

load 4 bytes as f32.

Note: When a number is stored into memory, it is converted into a sequence of bytes in little endian byte order.

m
### Stack:

[i32] → [f64]

load 8 bytes as f64.

m
### Stack:

[i32] → [i32]

load 1 byte and sign-extend i8 to i32.

Integer loads and stores can optionally specify a storage size that is smaller than the bit width of the respective value type. In the case of loads, a sign extension mode sx (s|u) is then required to select appropriate behavior.
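The two extension modes can be sketched for the 8-bit case (hypothetical helper names; wider loads work the same way):

```python
def load8_s(byte: int) -> int:
    """Sign-extension (i32.load8_s): bit 7 of the byte becomes the sign."""
    return byte - 0x100 if byte & 0x80 else byte

def load8_u(byte: int) -> int:
    """Zero-extension (i32.load8_u): the high bits of the result are 0."""
    return byte

load8_s(0xFF)   # → -1
load8_u(0xFF)   # → 255
load8_s(0x7F)   # → 127
```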

m
### Stack:

[i32] → [i32]

load 1 byte and zero-extend i8 to i32

m
### Stack:

[i32] → [i32]

load 2 bytes and sign-extend i16 to i32

m
### Stack:

[i32] → [i32]

load 2 bytes and zero-extend i16 to i32

m
### Stack:

[i32] → [i64]

load 1 byte and sign-extend i8 to i64

m
### Stack:

[i32] → [i64]

load 1 byte and zero-extend i8 to i64

m
### Stack:

[i32] → [i64]

load 2 bytes and sign-extend i16 to i64

m
### Stack:

[i32] → [i64]

load 2 bytes and zero-extend i16 to i64

m
### Stack:

[i32] → [i64]

load 4 bytes and sign-extend i32 to i64

m
### Stack:

[i32] → [i64]

load 4 bytes and zero-extend i32 to i64

m
### Stack:

[i32 i32] → []

store 4 bytes (no conversion)

m
### Stack:

[i32 i64] → []

store 8 bytes (no conversion)

m
### Stack:

[i32 f32] → []

store 4 bytes (no conversion)

m
### Stack:

[i32 f64] → []

store 8 bytes (no conversion)

m
### Stack:

[i32 i32] → []

wrap i32 to i8 and store 1 byte

m
### Stack:

[i32 i32] → []

wrap i32 to i16 and store 2 bytes

m
### Stack:

[i32 i64] → []

wrap i64 to i8 and store 1 byte

m
### Stack:

[i32 i64] → []

wrap i64 to i16 and store 2 bytes

m
### Stack:

[i32 i64] → []

wrap i64 to i32 and store 4 bytes

The **memory.size** instruction returns the current size of a memory.

Operates in units of page size. Each page is 65,536 bytes (64 KiB).

The memory.grow instruction grows memory by a given delta and returns the previous size, or −1 if enough memory cannot be allocated.

Operates in units of page size. Each page is 65,536 bytes (64 KiB).
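A rough model of the grow rule; `memory_grow` and `max_pages` are illustrative names (the real limit comes from the memory's declared maximum and the embedder):

```python
PAGE_SIZE = 65_536   # one WebAssembly page (64 KiB)

def memory_grow(memory: bytearray, delta_pages: int, max_pages: int) -> int:
    """Model of memory.grow: returns the previous size in pages,
    or -1 if the memory cannot be grown by delta_pages."""
    prev_pages = len(memory) // PAGE_SIZE
    if prev_pages + delta_pages > max_pages:
        return -1
    memory.extend(bytes(delta_pages * PAGE_SIZE))   # new pages are zeroed
    return prev_pages

mem = bytearray(2 * PAGE_SIZE)        # memory.size would report 2
memory_grow(mem, 3, max_pages=10)     # → 2 (the previous size)
memory_grow(mem, 100, max_pages=10)   # → -1 (cannot allocate)
```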

### Followed by:

- *n* : i32

### Stack:

[] → [i32]

Push a 32-bit integer value to the stack.

### Followed by:

- *n* : i64

### Stack:

[] → [i64]

Push a 64-bit integer value to the stack.

### Followed by:

- *z* : f32

### Stack:

[] → [f32]

Push a 32-bit float value to the stack.

Push a 64-bit float value to the stack.

compare equal to zero.

Return 1 if operand is zero, 0 otherwise.

compare equal to zero.

Return 1 if operand is zero, 0 otherwise.

==

sign-agnostic compare equal

==

sign-agnostic compare equal

≠

sign-agnostic compare unequal

≠

sign-agnostic compare unequal

<

signed less than

<

signed less than

<

unsigned less than

<

unsigned less than

>

signed greater than

>

signed greater than

>

unsigned greater than

>

unsigned greater than

≤

signed less than or equal

≤

signed less than or equal

≤

unsigned less than or equal

≤

unsigned less than or equal

≥

signed greater than or equal

≥

signed greater than or equal

≥

unsigned greater than or equal

≥

unsigned greater than or equal

==

compare equal

==

compare equal

≠

compare unordered or unequal

≠

compare unordered or unequal

<

less than

<

less than

>

greater than

>

greater than

≤

less than or equal

≤

less than or equal

≥

greater than or equal

≥

greater than or equal

sign-agnostic count leading zero bits

Return the count of leading zero bits in i. All zero bits are considered leading if the value is zero.

sign-agnostic count leading zero bits

Return the count of leading zero bits in i. All zero bits are considered leading if the value is zero.
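A Python sketch of the 32-bit case (the i64 version is analogous with 64 in place of 32; `i32_clz` is an illustrative name):

```python
def i32_clz(x: int) -> int:
    """Count leading zero bits of a 32-bit value; 32 when x == 0."""
    return 32 - x.bit_length()

i32_clz(1)            # → 31
i32_clz(0x80000000)   # → 0
i32_clz(0)            # → 32
```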

sign-agnostic count trailing zero bits

Return the count of trailing zero bits in i. All zero bits are considered trailing if the value is zero.

sign-agnostic count trailing zero bits

Return the count of trailing zero bits in i. All zero bits are considered trailing if the value is zero.

sign-agnostic count number of one bits

Return the count of non-zero bits in i.

sign-agnostic count number of one bits

Return the count of non-zero bits in i.

sign-agnostic addition

sign-agnostic addition

sign-agnostic subtraction

sign-agnostic subtraction

sign-agnostic multiplication, modulo 2^{32}

sign-agnostic multiplication, modulo 2^{64}

signed division (result is truncated toward zero)

signed division (result is truncated toward zero)

unsigned division (result is floored)

unsigned division (result is floored)

signed remainder (result has the sign of the dividend)

signed remainder (result has the sign of the dividend)
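A sketch of the signed-remainder rule; note that Python's own `%` follows the divisor's sign, so this model (illustrative name) has to compensate:

```python
def i32_rem_s(a: int, b: int) -> int:
    """Model of i32.rem_s: the result takes the sign of the dividend a
    (truncating division), unlike Python's %, which follows the divisor."""
    if b == 0:
        raise ZeroDivisionError("trap: integer divide by zero")
    r = abs(a) % abs(b)
    return -r if a < 0 else r

i32_rem_s(-7, 2)   # → -1  (Python's -7 % 2 would give 1)
i32_rem_s(7, -2)   # → 1
```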

unsigned remainder

unsigned remainder

sign-agnostic bitwise *and*.

Return the bitwise conjunction of 𝑖1 and 𝑖2.

sign-agnostic bitwise *and*.

Return the bitwise conjunction of 𝑖1 and 𝑖2.

sign-agnostic bitwise *inclusive or*.

Return the bitwise disjunction of 𝑖1 and 𝑖2.

sign-agnostic bitwise *inclusive or*.

Return the bitwise disjunction of 𝑖1 and 𝑖2.

sign-agnostic bitwise *exclusive or*.

Return the bitwise exclusive disjunction of 𝑖1 and 𝑖2.

sign-agnostic bitwise *exclusive or*.

Return the bitwise exclusive disjunction of 𝑖1 and 𝑖2.

sign-agnostic shift left

Return the result of shifting i1 left by k bits, modulo 2^{32}

sign-agnostic shift left

Return the result of shifting i1 left by k bits, modulo 2^{64}
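A model of the 32-bit case; per the spec the shift count itself is also taken modulo the bit width (the i64 version uses 64 and a 64-bit mask):

```python
MASK32 = 0xFFFFFFFF

def i32_shl(i1: int, k: int) -> int:
    """Model of i32.shl: shift count modulo 32, result modulo 2**32
    (bits shifted past the top are discarded)."""
    return (i1 << (k % 32)) & MASK32

i32_shl(1, 31)           # → 0x80000000
i32_shl(1, 32)           # → 1 (count taken mod 32)
i32_shl(0xFFFFFFFF, 4)   # → 0xFFFFFFF0
```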

sign-replicating (arithmetic) shift right

Return the result of shifting i1 right by k bits, extended with the most significant bit of the original value.

sign-replicating (arithmetic) shift right

Return the result of shifting i1 right by k bits, extended with the most significant bit of the original value.

zero-replicating (logical) shift right

Return the result of shifting i1 right by k bits, extended with 0 bits.

zero-replicating (logical) shift right

Return the result of shifting i1 right by k bits, extended with 0 bits.

sign-agnostic rotate left

Return the result of rotating i1 left by k bits.

sign-agnostic rotate left

Return the result of rotating i1 left by k bits.

sign-agnostic rotate right

Return the result of rotating i1 right by k bits.

sign-agnostic rotate right

Return the result of rotating i1 right by k bits.
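The rotation can be sketched for the 32-bit case (illustrative helper name; the i64 version uses 64 throughout):

```python
def i32_rotr(x: int, k: int) -> int:
    """Model of i32.rotr: bits shifted out on the right re-enter on the left."""
    k %= 32
    return ((x >> k) | (x << (32 - k))) & 0xFFFFFFFF

i32_rotr(0x00000001, 1)   # → 0x80000000
i32_rotr(0x12345678, 8)   # → 0x78123456
```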

absolute value

absolute value

negation

negation

ceiling operator

ceiling operator

floor operator

floor operator

round to nearest integer towards zero

round to nearest integer towards zero

round to nearest integer, ties to even

round to nearest integer, ties to even
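Python's built-in round() uses the same ties-to-even rule, so it serves as a handy model (with the caveat that f32.nearest / f64.nearest return a float, e.g. -0.0 for an input of -0.5):

```python
# Ties go to the nearest even integer, not "always up":
round(2.5)   # → 2, not 3
round(3.5)   # → 4
round(2.4)   # → 2
```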

square root

square root

addition

addition

subtraction

subtraction

multiplication

multiplication

division

division by ±0 is defined by IEEE 754: it yields ±∞ for a non-zero dividend, and NaN for 0/0

division

division by ±0 is defined by IEEE 754: it yields ±∞ for a non-zero dividend, and NaN for 0/0

minimum (binary operator); if either operand is NaN, returns NaN

minimum (binary operator); if either operand is NaN, returns NaN

maximum (binary operator); if either operand is NaN, returns NaN

maximum (binary operator); if either operand is NaN, returns NaN

If z1 and z2 have the same sign, then return z1. Else return z1 with negated sign.

If z1 and z2 have the same sign, then return z1. Else return z1 with negated sign.

wrap a 64-bit integer to a 32-bit integer.

Return i modulo 2^{32}.

truncate a 32-bit float to a signed 32-bit integer

truncate a 32-bit float to an unsigned 32-bit integer

truncate a 64-bit float to a signed 32-bit integer

truncate a 64-bit float to an unsigned 32-bit integer

extend a signed 32-bit integer to a 64-bit integer.

extend an unsigned 32-bit integer to a 64-bit integer.

truncate a 32-bit float to a signed 64-bit integer.

truncate a 32-bit float to an unsigned 64-bit integer.

truncate a 64-bit float to a signed 64-bit integer.

truncate a 64-bit float to an unsigned 64-bit integer.

convert a signed 32-bit integer to a 32-bit float.

convert an unsigned 32-bit integer to a 32-bit float.

convert a signed 64-bit integer to a 32-bit float.

convert an unsigned 64-bit integer to a 32-bit float.

demote a 64-bit float to a 32-bit float

convert a signed 32-bit integer to a 64-bit float.

convert an unsigned 32-bit integer to a 64-bit float.

convert a signed 64-bit integer to a 64-bit float.

convert an unsigned 64-bit integer to a 64-bit float.

promote a 32-bit float to a 64-bit float

reinterpret the bits of a 32-bit float as a 32-bit integer

reinterpret the bits of a 64-bit float as a 64-bit integer

reinterpret the bits of a 32-bit integer as a 32-bit float

reinterpret the bits of a 64-bit integer as a 64-bit float
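These bit-preserving conversions can be modeled with the struct module (illustrative helper names; the 64-bit pair uses "<d"/"<q" instead):

```python
import struct

def i32_reinterpret_f32(z: float) -> int:
    """Model of i32.reinterpret_f32: reuse the raw IEEE 754 bits unchanged."""
    return struct.unpack("<i", struct.pack("<f", z))[0]

def f32_reinterpret_i32(n: int) -> float:
    """Model of the inverse direction, f32.reinterpret_i32."""
    return struct.unpack("<f", struct.pack("<i", n))[0]

i32_reinterpret_f32(1.0)          # → 0x3F800000 (1065353216)
f32_reinterpret_i32(0x3F800000)   # → 1.0
```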

extend a signed 8-bit integer to a 32-bit integer

*Sign-extension operators extension*

extend a signed 16-bit integer to a 32-bit integer

*Sign-extension operators extension*

extend a signed 8-bit integer to a 64-bit integer

*Sign-extension operators extension*

extend a signed 16-bit integer to a 64-bit integer

*Sign-extension operators extension*

extend a signed 32-bit integer to a 64-bit integer

*Sign-extension operators extension*

evaluates to the null reference constant

*Reference Types Proposal*

checks for null

*Reference Types Proposal*

creates a reference to a given function

*Reference Types Proposal*

converts a nullable reference to a non-nullable one or traps if null

*Typed Function References Proposal*

converts a nullable reference to a non-nullable one or branches if null

*Typed Function References Proposal*

compares two references for equality

[eqref eqref] → [i32]

*GC Proposal*

checks for null and branches if present

*Typed Function References Proposal, GC Proposal*

Multibyte instructions beginning with the prefix 0xFB.

- Proposal to add garbage collection (GC) support.
- Proposal to add reference-typed strings.

See table below for full opcodes.

Multibyte instructions beginning with the prefix 0xFC.

Includes opcodes from the following extensions:

- Non-trapping float-to-int conversion
- Bulk Memory Operations
- Reference Types Proposal

See table below for full opcodes.

Multibyte instructions beginning with the prefix 0xFD.

SIMD opcodes (Vector instructions).

The opcode which follows the prefix uses variable-length integer encoding (LEB128).

See table below for full opcodes.

Multibyte instructions beginning with the prefix 0xFE.

Threads Proposal for WebAssembly.

See table below for full opcodes.

allocates a structure with canonical RTT (runtime type) and initialises its fields with given values

*Garbage Collection Proposal*

allocates a structure of type $t with canonical RTT (runtime type) and initialises its fields with default values

*Garbage Collection Proposal*

reads field `i` from a structure

*Garbage Collection Proposal*

writes field `i` of a structure

*Garbage Collection Proposal*

allocates an array with canonical RTT (runtime type)

*Garbage Collection Proposal*

allocates an array with canonical RTT (runtime type) and initialises its fields with the default value

*Garbage Collection Proposal*

reads an element from an array

*Garbage Collection Proposal*

writes an element to an array

*Garbage Collection Proposal*

inquires the length of an array

*Garbage Collection Proposal*

allocates an array with canonical RTT (runtime type) of fixed size and initialises it from operands

*Garbage Collection Proposal*

allocates an array with canonical RTT (runtime type) and initialises it from a data segment

*Garbage Collection Proposal*

allocates an array with canonical RTT (runtime type) and initialises it from an element segment

*Garbage Collection Proposal*

creates an i31ref from a 32-bit value, truncating the high bit

*Garbage Collection Proposal*

extracts the value, sign-extending

*Garbage Collection Proposal*

extracts the value, zero-extending

*Garbage Collection Proposal*
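A hedged Python sketch of these i31 semantics (reference-type machinery omitted; the function names are illustrative):

```python
def ref_i31(x: int) -> int:
    """Sketch of ref.i31: keep only the low 31 bits of the 32-bit input."""
    return x & 0x7FFF_FFFF

def i31_get_s(payload: int) -> int:
    """Sketch of i31.get_s: sign-extend the 31-bit payload (bit 30 is the sign bit)."""
    return payload - (1 << 31) if payload & (1 << 30) else payload

def i31_get_u(payload: int) -> int:
    """Sketch of i31.get_u: zero-extend the 31-bit payload."""
    return payload
```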

checks whether a reference has a given heap type

*Garbage Collection Proposal*

checks whether a reference has a given heap type

*Garbage Collection Proposal*

tries to convert to a given heap type

*Garbage Collection Proposal*

branches if a reference has a given heap type

*Garbage Collection Proposal*

branches if a reference does not have a given heap type

*Garbage Collection Proposal*

checks whether a reference has a given heap type

*Garbage Collection Proposal*

tries to convert to a given heap type

*Garbage Collection Proposal*

branches if a reference has a given heap type

*Garbage Collection Proposal*

branches if a reference does not have a given heap type

*Garbage Collection Proposal*

converts an external value into the internal representation

*Garbage Collection Proposal*

converts an internal value into the external representation

*Garbage Collection Proposal*

saturating form of i32.trunc_f32_s

*Non-trapping float-to-int Conversion Proposal*

saturating form of i32.trunc_f32_u

*Non-trapping float-to-int Conversion Proposal*

saturating form of i32.trunc_f64_s

*Non-trapping float-to-int Conversion Proposal*

saturating form of i32.trunc_f64_u

*Non-trapping float-to-int Conversion Proposal*

saturating form of i64.trunc_f32_s

*Non-trapping float-to-int Conversion Proposal*

saturating form of i64.trunc_f32_u

*Non-trapping float-to-int Conversion Proposal*

saturating form of i64.trunc_f64_s

*Non-trapping float-to-int Conversion Proposal*

saturating form of i64.trunc_f64_u

*Non-trapping float-to-int Conversion Proposal*
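Unlike the original trapping conversions, the saturating forms always produce a value. A one-value Python sketch of `i32.trunc_sat_f64_s` (an illustrative model, not any engine's code):

```python
import math

def i32_trunc_sat_f64_s(x: float) -> int:
    """Sketch of i32.trunc_sat_f64_s: truncate toward zero; saturate or zero instead of trapping."""
    if math.isnan(x):
        return 0                    # NaN becomes 0 rather than trapping
    if x >= 2**31:
        return 2**31 - 1            # saturate to INT32_MAX
    if x <= -2**31:
        return -(2**31)             # saturate to INT32_MIN
    return math.trunc(x)
```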

copy from a passive data segment to linear memory

*Bulk Memory Operations*

prevent further use of a passive data segment

*Bulk Memory Operations*

grows a table by a given number of entries

*Reference Types Proposal*

returns the current size of a table

*Reference Types Proposal*

fills a range in a table with a value

*Reference Types Proposal*

copy from one region of linear memory to another region

*Bulk Memory Operations*

fill a region of linear memory with a given byte value

*Bulk Memory Operations*
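These memory operations behave much like `memmove`/`memset` over linear memory; a rough Python model over a `bytearray` (bounds checks and traps omitted):

```python
def memory_fill(mem: bytearray, dest: int, value: int, n: int) -> None:
    """Sketch of memory.fill: write n copies of a byte value starting at dest."""
    mem[dest:dest + n] = bytes([value]) * n

def memory_copy(mem: bytearray, dest: int, src: int, n: int) -> None:
    """Sketch of memory.copy: copies correctly even when regions overlap."""
    mem[dest:dest + n] = bytes(mem[src:src + n])
```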

copy from a passive element segment to a table

*Bulk Memory Operations*

prevent further use of a passive element segment

*Bulk Memory Operations*

copy from one region of a table to another region

*Bulk Memory Operations*

Calculates the absolute value of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

Lane-wise addition of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Lane-wise rounding to the nearest integral value not smaller than the input.

Converts a 128-bit vector interpreted as four 32-bit unsigned integers into a 128-bit vector of four 32-bit floating point numbers.

Converts a 128-bit vector interpreted as four 32-bit signed integers into a 128-bit vector of four 32-bit floating point numbers.

Conversion of the two double-precision floating point lanes to two lower single-precision lanes of the result. The two higher lanes of the result are initialized to zero. If the conversion result is not representable as a single-precision floating point number, it is rounded to the nearest-even representable number.

Lane-wise division of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the corresponding input elements were equal, or all zeros otherwise.

Extracts a lane from a 128-bit vector interpreted as 4 packed f32 numbers.

Extracts the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Lane-wise rounding to the nearest integral value not greater than the input.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is greater than or equal to the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is greater than the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is less than or equal to the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is less than the right element, or all zeros otherwise.

Calculates the lane-wise maximum of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Calculates the lane-wise minimum of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Lane-wise multiplication of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit floating point numbers.

Returns a new vector where each lane is all ones if the corresponding input elements were not equal, or all zeros otherwise.

Lane-wise rounding to the nearest integral value; if two values are equally near, rounds to the even one.

Negates each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

Lane-wise maximum value, defined as `a < b ? b : a`

Lane-wise minimum value, defined as `b < a ? b : a`
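These "pseudo" min/max definitions differ from the IEEE-style `min`/`max` above in their NaN and signed-zero handling; a single-lane Python sketch:

```python
def pmax(a: float, b: float) -> float:
    """Sketch of f32x4.pmax on one lane: a < b ? b : a."""
    return b if a < b else a

def pmin(a: float, b: float) -> float:
    """Sketch of f32x4.pmin on one lane: b < a ? b : a."""
    return b if b < a else a

# With a NaN on the left, a < b is false, so pmax returns the NaN unchanged;
# with a NaN on the right, it returns the left operand.
```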

Replaces a lane from a 128-bit vector interpreted as 4 packed f32 numbers.

Rust: `fn f32x4_replace_lane`

Replaces the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Creates a vector with identical lanes.

Constructs a vector with x replicated to all 4 lanes.

Calculates the square root of each lane of a 128-bit vector interpreted as four 32-bit floating point numbers.

Lane-wise subtraction of two 128-bit vectors interpreted as four 32-bit floating point numbers.

Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.

Calculates the absolute value of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

Lane-wise add of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Lane-wise rounding to the nearest integral value not smaller than the input.

Lane-wise conversion from signed integer to floating point.

Lane-wise conversion from unsigned integer to floating point.

Lane-wise divide of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

Returns a new vector where each lane is all ones if the corresponding input elements were equal, or all zeros otherwise.

Extracts a lane from a 128-bit vector interpreted as 2 packed f64 numbers.

Extracts the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Lane-wise rounding to the nearest integral value not greater than the input.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is greater than the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is greater than the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers

Returns a new vector where each lane is all ones if the lane-wise left element is less than the right element, or all zeros otherwise.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

Returns a new vector where each lane is all ones if the lane-wise left element is less than the right element, or all zeros otherwise.

Calculates the lane-wise maximum of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Calculates the lane-wise minimum of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Lane-wise multiply of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Compares two 128-bit vectors as if they were two vectors of 2 sixty-four-bit floating point numbers.

Returns a new vector where each lane is all ones if the corresponding input elements were not equal, or all zeros otherwise.

Negates each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

Lane-wise maximum value

Lane-wise minimum value

Conversion of the two lower single-precision floating point lanes to the two double-precision lanes of the result.

Replaces a lane from a 128-bit vector interpreted as 2 packed f64 numbers.

Replaces the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Creates a vector with identical lanes.

Constructs a vector with x replicated to all 2 lanes.

Calculates the square root of each lane of a 128-bit vector interpreted as two 64-bit floating point numbers.

Lane-wise subtract of two 128-bit vectors interpreted as two 64-bit floating point numbers.

Lane-wise rounding to the nearest integral value with the magnitude not larger than the input.

Lane-wise wrapping absolute value.

Adds two 128-bit vectors as if they were two packed sixteen 8-bit integers.

Adds two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MAX.

Returns true if all lanes are non-zero, false otherwise.

Extracts the high bit for each lane in `a` and produces a scalar mask with all bits concatenated.
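A per-lane Python sketch of `i8x16.bitmask` (lanes given as unsigned bytes; the function name is illustrative):

```python
def i8x16_bitmask(lanes: list[int]) -> int:
    """Sketch of i8x16.bitmask: bit i of the result is the high (sign) bit of lane i."""
    mask = 0
    for i, lane in enumerate(lanes):
        if lane & 0x80:            # high bit of the 8-bit lane
            mask |= 1 << i
    return mask
```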

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

Returns a new vector where each lane is all ones if the corresponding input elements were equal, or all zeros otherwise.

Rust: `fn i8x16_extract_lane<const N: usize>(a: v128) -> i8`

Extracts a lane from a 128-bit vector interpreted as 16 packed i8 numbers.

Extracts the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit signed integers.

Compares lane-wise signed integers, and returns the maximum of each pair.

Compares lane-wise signed integers, and returns the minimum of each pair.

Converts two input vectors into a smaller lane vector by narrowing each lane.

Signed saturation to 0x7f or 0x80 is used and the input lanes are always interpreted as signed integers.
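A hedged sketch of `i8x16.narrow_i16x8_s` with lanes as plain Python ints:

```python
def i8x16_narrow_i16x8_s(a: list[int], b: list[int]) -> list[int]:
    """Sketch of i8x16.narrow_i16x8_s: saturate each i16 lane to i8, then concatenate a and b."""
    saturate = lambda x: max(-0x80, min(0x7F, x))
    return [saturate(x) for x in a + b]
```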

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit integers.

Returns a new vector where each lane is all ones if the corresponding input elements were not equal, or all zeros otherwise.

Negates a 128-bit vector interpreted as sixteen 8-bit signed integers.

Count the number of bits set to one within each lane.

Replaces a lane from a 128-bit vector interpreted as 16 packed i8 numbers.

Replaces the scalar value of the lane specified in the immediate-mode operand `N` from `a`. If `N` is out of bounds, it is a compile-time error.

Shifts each lane to the left by the specified number of bits.

Only the low bits of the shift amount are used if the shift amount is greater than the lane width.
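That shift-count rule amounts to reducing the amount modulo the lane width; an illustrative sketch for 8-bit lanes:

```python
def i8x16_shl(lanes: list[int], amount: int) -> list[int]:
    """Sketch of i8x16.shl: only amount % 8 is used, and results wrap to 8 bits."""
    amount %= 8
    return [(x << amount) & 0xFF for x in lanes]
```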

Shifts each lane to the right by the specified number of bits, sign extending.

Only the low bits of the shift amount are used if the shift amount is greater than the lane width.

Returns a new vector with lanes selected from the lanes of the two input vectors $a and $b specified in the 16 immediate operands.

Creates a vector with identical lanes.

Constructs a vector with x replicated to all 16 lanes.

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit integers.

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit signed integers, saturating on overflow to i8::MIN.

Returns a new vector with lanes selected from the lanes of the first input vector `a` specified in the second input vector `s`.

The indices `i` in range `[0, 15]` select the `i`-th element of `a`. For indices outside of that range the resulting lane is 0.
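A per-lane Python sketch of that selection rule (names illustrative):

```python
def i8x16_swizzle(a: list[int], s: list[int]) -> list[int]:
    """Sketch of i8x16.swizzle: output lane i is a[s[i]] for s[i] in [0, 15], else 0."""
    return [a[idx] if 0 <= idx < 16 else 0 for idx in s]
```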

Lane-wise wrapping absolute value.

Adds two 128-bit vectors as if they were two packed eight 16-bit integers.

Adds two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MAX.

Returns true if all lanes are non-zero, false otherwise.

Extracts the high bit for each lane in `a` and produces a scalar mask with all bits concatenated.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

Integer extended pairwise addition producing extended results (twice wider results than the inputs).

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Extracts a lane from a 128-bit vector interpreted as 8 packed i16 numbers.

Compares two 128-bit vectors as if they were two vectors of eight 16-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of eight 16-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of eight 16-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of eight 16-bit signed integers.

Compares lane-wise signed integers, and returns the maximum of each pair.

Compares lane-wise signed integers, and returns the minimum of each pair.

Multiplies two 128-bit vectors as if they were two packed eight 16-bit integers.

Converts two input vectors into a smaller lane vector by narrowing each lane.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit integers.

Negates a 128-bit vector interpreted as eight 16-bit signed integers.

Rust: `fn i16x8_q15mulr_sat(a: v128, b: v128) -> v128`

Lane-wise saturating rounding multiplication in Q15 format.
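Q15 treats an i16 as a fixed-point fraction with 15 fractional bits; a one-lane sketch of the saturating rounding multiply (an illustrative model of the semantics):

```python
def q15mulr_sat_s(a: int, b: int) -> int:
    """Sketch of one lane of i16x8.q15mulr_sat: (a*b + 2^14) >> 15, saturated to i16."""
    product = (a * b + (1 << 14)) >> 15   # rounding right shift
    return max(-0x8000, min(0x7FFF, product))

# 0.5 * 0.5 in Q15: 16384 * 16384 gives 8192, i.e. 0.25
```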

Replaces a lane from a 128-bit vector interpreted as 8 packed i16 numbers.

Shifts each lane to the left by the specified number of bits.

Shifts each lane to the right by the specified number of bits, sign extending.

Same as i8x16_shuffle, except operates as if the inputs were eight 16-bit integers, only taking 8 indices to shuffle.

Indices in the range [0, 7] select from a while [8, 15] select from b. Note that this will generate the i8x16.shuffle instruction, since there is no native i16x8.shuffle instruction (there is no need for one since i8x16.shuffle suffices).

Creates a vector with identical lanes.

Subtracts two 128-bit vectors as if they were two packed eight 16-bit integers.

Subtracts two 128-bit vectors as if they were two packed eight 16-bit signed integers, saturating on overflow to i16::MIN.

Lane-wise wrapping absolute value.

Adds two 128-bit vectors as if they were two packed four 32-bit integers.

Returns true if all lanes are non-zero, false otherwise.

Extracts the high bit for each lane in `a` and produces a scalar mask with all bits concatenated.

Lane-wise multiply signed 16-bit integers in the two input vectors and add adjacent pairs of the full 32-bit results.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

Integer extended pairwise addition producing extended results (twice wider results than the inputs).

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Extracts a lane from a 128-bit vector interpreted as 4 packed i32 numbers.

Compares two 128-bit vectors as if they were two vectors of four 32-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of four 32-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of four 32-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit signed integers.

Compares lane-wise signed integers, and returns the maximum of each pair.

Compares lane-wise signed integers, and returns the minimum of each pair.

Multiplies two 128-bit vectors as if they were two packed four 32-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit integers.

Negates a 128-bit vector interpreted as four 32-bit signed integers.

Replaces a lane from a 128-bit vector interpreted as 4 packed i32 numbers.

Shifts each lane to the left by the specified number of bits.

Shifts each lane to the right by the specified number of bits, sign extending.

Creates a vector with identical lanes.

Subtracts two 128-bit vectors as if they were two packed four 32-bit integers.

Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit signed integers.

Saturating conversion of the two double-precision floating point lanes to two lower integer lanes using the IEEE convertToIntegerTowardZero function.

Lane-wise wrapping absolute value.

Adds two 128-bit vectors as if they were two packed two 64-bit integers.

Returns true if all lanes are non-zero, false otherwise.

Extracts the high bit for each lane in `a` and produces a scalar mask with all bits concatenated.

Compares two 128-bit vectors as if they were two vectors of two 64-bit integers.

Converts high half of the smaller lane vector to a larger lane vector, sign extended.

Converts low half of the smaller lane vector to a larger lane vector, sign extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Extracts a lane from a 128-bit vector interpreted as 2 packed i64 numbers.

Compares two 128-bit vectors as if they were two vectors of two 64-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of two 64-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of two 64-bit signed integers.

Compares two 128-bit vectors as if they were two vectors of two 64-bit signed integers.

Multiplies two 128-bit vectors as if they were two packed two 64-bit integers.

Compares two 128-bit vectors as if they were two vectors of two 64-bit integers.

Negates a 128-bit vector interpreted as two 64-bit signed integers.

Replaces a lane from a 128-bit vector interpreted as 2 packed i64 numbers.

Shifts each lane to the left by the specified number of bits.

Shifts each lane to the right by the specified number of bits, sign extending.

Creates a vector with identical lanes.

Subtracts two 128-bit vectors as if they were two packed two 64-bit integers.

Adds two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to u8::MAX.

Lane-wise rounding average.

Extracts a lane from a 128-bit vector interpreted as 16 packed u8 numbers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 16 eight-bit unsigned integers.

Compares lane-wise unsigned integers, and returns the maximum of each pair.

Compares lane-wise unsigned integers, and returns the minimum of each pair.

Converts two input vectors into a smaller lane vector by narrowing each lane.

Shifts each lane to the right by the specified number of bits, shifting in zeros.

Subtracts two 128-bit vectors as if they were two packed sixteen 8-bit unsigned integers, saturating on overflow to 0.

Adds two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to u16::MAX.

Lane-wise rounding average.

Integer extended pairwise addition producing extended results (twice wider results than the inputs).

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Extracts a lane from a 128-bit vector interpreted as 8 packed u16 numbers.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 8 sixteen-bit unsigned integers.

Compares lane-wise unsigned integers, and returns the maximum of each pair.

Compares lane-wise unsigned integers, and returns the minimum of each pair.

Converts two input vectors into a smaller lane vector by narrowing each lane.

Shifts each lane to the right by the specified number of bits, shifting in zeros.

Subtracts two 128-bit vectors as if they were two packed eight 16-bit unsigned integers, saturating on overflow to 0.

Integer extended pairwise addition producing extended results (twice wider results than the inputs).

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

Compares two 128-bit vectors as if they were two vectors of 4 thirty-two-bit unsigned integers.

Compares lane-wise unsigned integers, and returns the maximum of each pair.

Compares lane-wise unsigned integers, and returns the minimum of each pair.

Shifts each lane to the right by the specified number of bits, shifting in zeros.

Converts a 128-bit vector interpreted as four 32-bit floating point numbers into a 128-bit vector of four 32-bit unsigned integers.

Converts high half of the smaller lane vector to a larger lane vector, zero extended.

Converts low half of the smaller lane vector to a larger lane vector, zero extended.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Lane-wise integer extended multiplication producing twice wider result than the inputs.

Shifts each lane to the right by the specified number of bits, shifting in zeros.

Performs a bitwise and of the two input 128-bit vectors, returning the resulting vector.

Bitwise AND of bits of `a` and the logical inverse of bits of `b`.

Returns true if any bit in `a` is set, or false otherwise.

Rust: `fn v128_bitselect(v1: v128, v2: v128, c: v128) -> v128`

Uses the bitmask in `c` to select bits from `v1` when 1 and `v2` when 0.
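Treating the three operands as 128-bit integers, the selection is plain bitwise arithmetic; an illustrative sketch:

```python
MASK128 = (1 << 128) - 1

def v128_bitselect(v1: int, v2: int, c: int) -> int:
    """Sketch of v128.bitselect: take each bit from v1 where c has a 1, from v2 where c has a 0."""
    return (v1 & c) | (v2 & ~c & MASK128)
```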

Loads a v128 vector from the given heap address.

Loads an 8-bit value from m and sets lane L of v to that value.

Load a single element and splat to all lanes of a v128 vector.

Loads a 16-bit value from m and sets lane L of v to that value.

Load a single element and splat to all lanes of a v128 vector.

Loads a 32-bit value from m and sets lane L of v to that value.

Load a single element and splat to all lanes of a v128 vector.

Load a 32-bit element into the low bits of the vector and sets all other bits to zero.

Loads a 64-bit value from m and sets lane L of v to that value.

Load a single element and splat to all lanes of a v128 vector.

Load a 64-bit element into the low bits of the vector and sets all other bits to zero.

Flips each bit of the 128-bit input vector.

Performs a bitwise or of the two input 128-bit vectors, returning the resulting vector.

Stores a v128 vector to the given heap address.

Stores the 8-bit value from lane L of v into m

Stores the 16-bit value from lane L of v into m

Stores the 32-bit value from lane L of v into m

Stores the 64-bit value from lane L of v into m

Performs a bitwise xor of the two input 128-bit vectors, returning the resulting vector.

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

Opcode location during prototyping.

*Relaxed SIMD proposal*

`relaxed i8x16.swizzle(a, s)` selects lanes from `a` using indices in `s`. An index `i` in the range `[0, 15]` selects the `i`-th element of `a`; the result for any out-of-range index (i.e. `[16, 255]`) is implementation-defined.

`relaxed i32x4.trunc_f32x4_s` (relaxed version of `i32x4.trunc_sat_f32x4_s`)

This instruction has the same behavior as the non-relaxed instruction for lanes that are in the range of an `i32` (signed or unsigned depending on the instruction). The result of lanes which contain NaN is implementation-defined: either 0, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned. The result of lanes which are out of bounds of `INT32` or `UINT32` is implementation-defined: it can be either the saturated result, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned.

`relaxed i32x4.trunc_f32x4_u` (relaxed version of `i32x4.trunc_sat_f32x4_u`)

This instruction has the same behavior as the non-relaxed instruction for lanes that are in the range of an `i32` (signed or unsigned depending on the instruction). The result of lanes which contain NaN is implementation-defined: either 0, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned. The result of lanes which are out of bounds of `INT32` or `UINT32` is implementation-defined: it can be either the saturated result, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned.

`relaxed i32x4.trunc_f64x2_s_zero` (relaxed version of `i32x4.trunc_sat_f64x2_s_zero`)

This instruction has the same behavior as the non-relaxed instruction for lanes that are in the range of an `i32` (signed or unsigned depending on the instruction). The result of lanes which contain NaN is implementation-defined: either 0, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned. The result of lanes which are out of bounds of `INT32` or `UINT32` is implementation-defined: it can be either the saturated result, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned.

`relaxed i32x4.trunc_f64x2_u_zero` (relaxed version of `i32x4.trunc_sat_f64x2_u_zero`)

This instruction has the same behavior as the non-relaxed instruction for lanes that are in the range of an `i32` (signed or unsigned depending on the instruction). The result of lanes which contain NaN is implementation-defined: either 0, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned. The result of lanes which are out of bounds of `INT32` or `UINT32` is implementation-defined: it can be either the saturated result, or `INT32_MAX` for signed and `UINT32_MAX` for unsigned.

Relaxed fused multiply-add

`relaxed f32x4.madd(a, b, c) = a * b + c`

where:

- the intermediate `a * b` is rounded first, and the final result is rounded again (for a total of 2 roundings), or
- the entire expression is evaluated with higher precision and then rounded once (if supported by hardware).
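The two permitted behaviors can genuinely differ. A Python sketch using `struct` to model f32 rounding (a worked example of the double-rounding difference, not any engine's implementation):

```python
import struct

def to_f32(x: float) -> float:
    """Round a Python float (f64) to the nearest f32."""
    return struct.unpack('f', struct.pack('f', x))[0]

def madd_double_rounded(a: float, b: float, c: float) -> float:
    """Allowed behavior 1: round a*b to f32, then round the sum (2 roundings)."""
    return to_f32(to_f32(a * b) + c)

def madd_fused(a: float, b: float, c: float) -> float:
    """Allowed behavior 2: keep a*b at higher precision and round once."""
    return to_f32(a * b + c)

# With a = b = 1 + 2**-12 and c = -1.0, the two behaviors disagree:
# double-rounded gives 2**-11, fused gives 2**-11 + 2**-24.
```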

Relaxed fused negative multiply-add

`relaxed f32x4.nmadd(a, b, c) = -(a * b) + c`

where:

- the intermediate `a * b` is rounded first, and the final result is rounded again (for a total of 2 roundings), or
- the entire expression is evaluated with higher precision and then rounded once (if supported by hardware).

Relaxed fused multiply-add

`relaxed f64x2.madd(a, b, c) = a * b + c`

where:

- the intermediate `a * b` is rounded first, and the final result is rounded again (for a total of 2 roundings), or
- the entire expression is evaluated with higher precision and then rounded once (if supported by hardware).

Relaxed fused negative multiply-add

`relaxed f64x2.nmadd(a, b, c) = -(a * b) + c`

where:

- the intermediate `a * b` is rounded first, and the final result is rounded again (for a total of 2 roundings), or
- the entire expression is evaluated with higher precision and then rounded once (if supported by hardware).

`i8x16.laneselect(a: v128, b: v128, m: v128) -> v128`

Selects lanes from `a` or `b` based on masks in `m`. If each lane-sized mask in `m` has all bits set or all bits unset, these instructions behave the same as `v128.bitselect`. Otherwise, the result is implementation-defined.

`i16x8.laneselect(a: v128, b: v128, m: v128) -> v128`

Selects lanes from `a` or `b` based on masks in `m`. If each lane-sized mask in `m` has all bits set or all bits unset, these instructions behave the same as `v128.bitselect`. Otherwise, the result is implementation-defined.

`i32x4.laneselect(a: v128, b: v128, m: v128) -> v128`

Selects lanes from `a` or `b` based on masks in `m`. If each lane-sized mask in `m` has all bits set or all bits unset, these instructions behave the same as `v128.bitselect`. Otherwise, the result is implementation-defined.

`i64x2.laneselect(a: v128, b: v128, m: v128) -> v128`

Selects lanes from `a` or `b` based on masks in `m`. If each lane-sized mask in `m` has all bits set or all bits unset, these instructions behave the same as `v128.bitselect`. Otherwise, the result is implementation-defined.

Relaxed min

`f32x4.min(a: v128, b: v128) -> v128`

Returns the lane-wise minimum of two values. If either value is NaN, or the values are -0.0 and +0.0, the return value is implementation-defined.

Relaxed max

`f32x4.max(a: v128, b: v128) -> v128`

Returns the lane-wise maximum of two values. If either value is NaN, or the values are -0.0 and +0.0, the return value is implementation-defined.

Relaxed min

`f64x2.min(a: v128, b: v128) -> v128`

Returns the lane-wise minimum of two values. If either value is NaN, or the values are -0.0 and +0.0, the return value is implementation-defined.

Relaxed max

`f64x2.max(a: v128, b: v128) -> v128`

Returns the lane-wise maximum of two values. If either value is NaN, or the values are -0.0 and +0.0, the return value is implementation-defined.

Relaxed Rounding Q-format Multiplication

`i16x8.q15mulr_s(a: v128, b: v128) -> v128`

Returns the multiplication of 2 fixed-point numbers in Q15 format. If both inputs are `INT16_MIN`, the result overflows, and the return value is implementation defined (either `INT16_MIN` or `INT16_MAX`).
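Q15 fixed point represents values in [-1, 1) as `value / 32768`. A single lane of the non-relaxed, saturating variant (`i16x8.q15mulr_sat_s`) can be sketched in Python; the relaxed instruction behaves identically except in the `INT16_MIN * INT16_MIN` overflow case:

```python
INT16_MIN, INT16_MAX = -0x8000, 0x7FFF

def q15mulr_sat(a: int, b: int) -> int:
    """One lane of i16x8.q15mulr_sat_s: multiply two Q15 values,
    round to nearest, rescale back to Q15, and saturate to i16 range."""
    r = (a * b + (1 << 14)) >> 15          # (a*b + 0x4000) >> 15
    return max(INT16_MIN, min(INT16_MAX, r))

assert q15mulr_sat(0x4000, 0x4000) == 0x2000        # 0.5 * 0.5 = 0.25
assert q15mulr_sat(INT16_MIN, INT16_MIN) == INT16_MAX  # overflow saturates
```

The relaxed form may return `INT16_MIN` instead of `INT16_MAX` in that last case, depending on the target hardware.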

Relaxed integer dot product

`i16x8.dot_i8x16_i7x16_s(a: v128, b: v128) -> v128`

Returns the multiplication of 8-bit elements (signed or unsigned) by 7-bit elements (unsigned) with accumulation of adjacent products. The `i32x4` version allows for accumulation into another vector. When the second operand of the product has the high bit set in a lane, that lane's result is implementation defined.

Relaxed integer dot product

`i32x4.dot_i8x16_i7x16_add_s(a: v128, b: v128, c: v128) -> v128`

Returns the multiplication of 8-bit elements (signed or unsigned) by 7-bit elements (unsigned) with accumulation of adjacent products. The `i32x4` version allows for accumulation into another vector. When the second operand of the product has the high bit set in a lane, that lane's result is implementation defined.
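The well-defined case (all `b` lanes in 0..127) can be modeled lane-by-lane in Python, treating the vectors as lists of small integers, a sketch rather than an implementation:

```python
def dot_i8x16_i7x16_s(a: list[int], b: list[int]) -> list[int]:
    """Model of i16x8.dot_i8x16_i7x16_s for the well-defined case:
    a is 16 signed 8-bit lanes, b is 16 lanes restricted to 0..127.
    Adjacent products are summed pairwise into 8 result lanes."""
    assert len(a) == len(b) == 16
    assert all(0 <= x <= 127 for x in b), "high bit set => implementation defined"
    return [a[2*i] * b[2*i] + a[2*i + 1] * b[2*i + 1] for i in range(8)]

a = [1, -2] * 8
b = [3, 4] * 8
assert dot_i8x16_i7x16_s(a, b) == [-5] * 8   # 1*3 + (-2)*4 per pair
```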

Relaxed BFloat16 dot product

`f32x4.dot_bf16x8_add_f32(a: v128, b: v128, c: v128) -> v128`

BFloat16 is a 16-bit floating-point format that represents the IEEE FP32 numbers truncated to the high 16 bits. This instruction computes an FP32 dot product of 2 BFloat16 values with accumulation into another FP32.
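The "truncated to the high 16 bits" relationship can be shown directly with Python's `struct`; this models only the BFloat16/FP32 encoding, not the dot product instruction itself:

```python
import struct

def f32_to_bf16_bits(x: float) -> int:
    """BFloat16 by truncation: keep the high 16 bits of the IEEE f32 encoding
    (sign, the same 8 exponent bits, and the top 7 mantissa bits)."""
    f32_bits, = struct.unpack("<I", struct.pack("<f", x))
    return f32_bits >> 16

def bf16_bits_to_f32(bits: int) -> float:
    """Widening back to f32 just zero-fills the dropped low 16 mantissa bits."""
    return struct.unpack("<f", struct.pack("<I", bits << 16))[0]

# Values with short mantissas survive the round trip exactly:
assert bf16_bits_to_f32(f32_to_bf16_bits(1.0)) == 1.0
assert bf16_bits_to_f32(f32_to_bf16_bits(-2.5)) == -2.5
```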

Atomic load/store memory accesses behave like their non-atomic counterparts, with the exception that the ordering of accesses is sequentially consistent.

`i32.atomic.load8_u`: atomically load 1 byte and zero-extend i8 to i32

`i32.atomic.load16_u`: atomically load 2 bytes and zero-extend i16 to i32

`i32.atomic.load`: atomically load 4 bytes as i32

`i64.atomic.load8_u`: atomically load 1 byte and zero-extend i8 to i64

`i64.atomic.load16_u`: atomically load 2 bytes and zero-extend i16 to i64

`i64.atomic.load32_u`: atomically load 4 bytes and zero-extend i32 to i64

`i64.atomic.load`: atomically load 8 bytes as i64

`i32.atomic.store8`: wrap i32 to i8 and atomically store 1 byte

`i32.atomic.store16`: wrap i32 to i16 and atomically store 2 bytes

`i32.atomic.store`: atomically store 4 bytes (no conversion)

`i64.atomic.store8`: wrap i64 to i8 and atomically store 1 byte

`i64.atomic.store16`: wrap i64 to i16 and atomically store 2 bytes

`i64.atomic.store32`: wrap i64 to i32 and atomically store 4 bytes

`i64.atomic.store`: atomically store 8 bytes (no conversion)
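The "wrap" and "zero-extend" conversions around the narrow accesses are plain bit arithmetic; a minimal Python sketch:

```python
def wrap(value: int, byte_width: int) -> int:
    """Truncate an integer to byte_width bytes, as the narrow atomic
    stores do before writing (i32.atomic.store8 keeps only the low byte)."""
    return value & ((1 << (8 * byte_width)) - 1)

def zero_extend(raw: int, byte_width: int) -> int:
    """The _u loads read byte_width raw bytes and never set the upper
    bits of the result, so zero-extension is the identity on the raw value."""
    assert 0 <= raw < (1 << (8 * byte_width))
    return raw

# store8 then load8_u round-trips only the low byte of the value:
assert zero_extend(wrap(0x1234, 1), 1) == 0x34
assert zero_extend(wrap(0x1234, 2), 2) == 0x1234
```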

Atomic read-modify-write (RMW) operators atomically read a value from an address, modify the value, and store the resulting value to the same address. All RMW operators return the value read from memory before the modify operation was performed. They take two operands, an address and a value used in the modify operation.

`i32.atomic.rmw8.add_u`: 8-bit sign-agnostic addition (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.add_u`: 16-bit sign-agnostic addition (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.add`: 32-bit sign-agnostic addition (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.add_u`: 8-bit sign-agnostic addition (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.add_u`: 16-bit sign-agnostic addition (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.add_u`: 32-bit sign-agnostic addition (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.add`: 64-bit sign-agnostic addition (read 8 bytes, write 8 bytes; returns as i64)
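The read/modify/write/return-old-value pattern can be sketched over a toy linear memory in Python. This single-threaded model ignores the cross-thread atomicity that is the point of the real instructions; it only shows the data flow:

```python
memory = bytearray(16)  # stand-in for wasm linear memory

def rmw8_add_u(addr: int, value: int) -> int:
    """Model of i32.atomic.rmw8.add_u: read 1 byte, add, write back
    wrapped to 8 bits, and return the value read *before* the add."""
    old = memory[addr]                    # read
    memory[addr] = (old + value) & 0xFF   # modify + write (wraps at 256)
    return old                            # RMW ops return the old value

memory[0] = 250
assert rmw8_add_u(0, 10) == 250   # old value comes back
assert memory[0] == 4             # (250 + 10) mod 256 was stored
```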

`i32.atomic.rmw8.sub_u`: 8-bit sign-agnostic subtraction (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.sub_u`: 16-bit sign-agnostic subtraction (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.sub`: 32-bit sign-agnostic subtraction (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.sub_u`: 8-bit sign-agnostic subtraction (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.sub_u`: 16-bit sign-agnostic subtraction (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.sub_u`: 32-bit sign-agnostic subtraction (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.sub`: 64-bit sign-agnostic subtraction (read 8 bytes, write 8 bytes; returns as i64)

`i32.atomic.rmw8.and_u`: 8-bit sign-agnostic bitwise and (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.and_u`: 16-bit sign-agnostic bitwise and (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.and`: 32-bit sign-agnostic bitwise and (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.and_u`: 8-bit sign-agnostic bitwise and (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.and_u`: 16-bit sign-agnostic bitwise and (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.and_u`: 32-bit sign-agnostic bitwise and (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.and`: 64-bit sign-agnostic bitwise and (read 8 bytes, write 8 bytes; returns as i64)

`i32.atomic.rmw8.or_u`: 8-bit sign-agnostic bitwise inclusive or (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.or_u`: 16-bit sign-agnostic bitwise inclusive or (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.or`: 32-bit sign-agnostic bitwise inclusive or (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.or_u`: 8-bit sign-agnostic bitwise inclusive or (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.or_u`: 16-bit sign-agnostic bitwise inclusive or (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.or_u`: 32-bit sign-agnostic bitwise inclusive or (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.or`: 64-bit sign-agnostic bitwise inclusive or (read 8 bytes, write 8 bytes; returns as i64)

`i32.atomic.rmw8.xor_u`: 8-bit sign-agnostic bitwise exclusive or (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.xor_u`: 16-bit sign-agnostic bitwise exclusive or (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.xor`: 32-bit sign-agnostic bitwise exclusive or (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.xor_u`: 8-bit sign-agnostic bitwise exclusive or (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.xor_u`: 16-bit sign-agnostic bitwise exclusive or (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.xor_u`: 32-bit sign-agnostic bitwise exclusive or (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.xor`: 64-bit sign-agnostic bitwise exclusive or (read 8 bytes, write 8 bytes; returns as i64)

The atomic exchange operators: the "modify" step is a nop, so the operand value is stored unchanged and the previously stored value is returned.

`i32.atomic.rmw8.xchg_u`: 8-bit exchange (read 1 byte, write 1 byte; returns i8 zero-extended to i32)

`i32.atomic.rmw16.xchg_u`: 16-bit exchange (read 2 bytes, write 2 bytes; returns i16 zero-extended to i32)

`i32.atomic.rmw.xchg`: 32-bit exchange (read 4 bytes, write 4 bytes; returns as i32)

`i64.atomic.rmw8.xchg_u`: 8-bit exchange (read 1 byte, write 1 byte; returns i8 zero-extended to i64)

`i64.atomic.rmw16.xchg_u`: 16-bit exchange (read 2 bytes, write 2 bytes; returns i16 zero-extended to i64)

`i64.atomic.rmw32.xchg_u`: 32-bit exchange (read 4 bytes, write 4 bytes; returns i32 zero-extended to i64)

`i64.atomic.rmw.xchg`: 64-bit exchange (read 8 bytes, write 8 bytes; returns as i64)

The compare-exchange operators take three operands: an address, an `expected` value, and a `replacement` value. The loaded value is compared against `expected`; if they are equal, `replacement` is stored; in either case the loaded value is returned.

`i32.atomic.rmw8.cmpxchg_u`: load 1 byte; compare with `expected` wrapped from i32 to i8 (8-bit compare equal); conditionally store `replacement` wrapped from i32 to i8 (1 byte); return the loaded value zero-extended from i8 to i32

`i32.atomic.rmw16.cmpxchg_u`: load 2 bytes; compare with `expected` wrapped from i32 to i16 (16-bit compare equal); conditionally store `replacement` wrapped from i32 to i16 (2 bytes); return the loaded value zero-extended from i16 to i32

`i32.atomic.rmw.cmpxchg`: load 4 bytes; compare with `expected` (32-bit compare equal); conditionally store `replacement` (4 bytes); return the loaded value as i32

`i64.atomic.rmw8.cmpxchg_u`: load 1 byte; compare with `expected` wrapped from i64 to i8 (8-bit compare equal); conditionally store `replacement` wrapped from i64 to i8 (1 byte); return the loaded value zero-extended from i8 to i64

`i64.atomic.rmw16.cmpxchg_u`: load 2 bytes; compare with `expected` wrapped from i64 to i16 (16-bit compare equal); conditionally store `replacement` wrapped from i64 to i16 (2 bytes); return the loaded value zero-extended from i16 to i64

`i64.atomic.rmw32.cmpxchg_u`: load 4 bytes; compare with `expected` wrapped from i64 to i32 (32-bit compare equal); conditionally store `replacement` wrapped from i64 to i32 (4 bytes); return the loaded value zero-extended from i32 to i64

`i64.atomic.rmw.cmpxchg`: load 8 bytes; compare with `expected` (64-bit compare equal); conditionally store `replacement` (8 bytes); return the loaded value as i64
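The compare-exchange data flow, again as a single-threaded Python sketch over a toy linear memory (the real instruction performs the whole sequence atomically across threads):

```python
memory = bytearray(8)  # stand-in for wasm linear memory

def rmw_cmpxchg(addr: int, expected: int, replacement: int) -> int:
    """Model of i32.atomic.rmw.cmpxchg: store replacement only if the
    loaded 32-bit value equals expected; always return the loaded value."""
    loaded = int.from_bytes(memory[addr:addr + 4], "little")
    if loaded == expected:
        memory[addr:addr + 4] = replacement.to_bytes(4, "little")
    return loaded

memory[0:4] = (7).to_bytes(4, "little")
assert rmw_cmpxchg(0, 7, 99) == 7     # matched: 99 stored, old 7 returned
assert rmw_cmpxchg(0, 7, 123) == 99   # no match: memory left unchanged
assert int.from_bytes(memory[0:4], "little") == 99
```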

- WebAssembly Specifications
- Instructions (opcodes)
- WebAssembly — MDN (developer.mozilla.org)
- Instruction reference (opcodes)
- Introduction to WebAssembly — Rasmus Andersson (rsms.me)

- WebAssembly Roadmap — implemented features in popular browsers and engines
- WebAssembly proposals (GitHub)
- Bulk Memory Operations and Conditional Segment Initialization
- Reference Types Proposal
- Exception Handling Proposal
- Non-trapping float-to-int Conversion Proposal
- Reference-Typed Strings Proposal
- Stack Switching Proposal / Typed continuations — proposes several new instructions: cont.new, cont.bind, resume, resume_throw
- Call Tags Proposal — proposes new instructions: call_with_tag, call_tag.new, call_tag.canon
- Memory control proposal — proposes new instructions: memory.map, memory.unmap, memory.protect, memory.discard
- Constant-Time Extension Proposal — proposes new instructions for secret types: iNN.declassify, sNN.classify, sNN.load, sNN.store, ...
- Rust Module core::arch::wasm — SIMD documentation

WebAssembly Opcodes by Pengo Wray

Contributors: nokotan (かめのこにょこにょこ)