Base Address Register

INTRODUCTION TO THE ARM INSTRUCTION SET

ANDREW N. SLOSS , ... CHRIS WRIGHT , in ARM System Developer's Guide, 2004

3.3.2 SINGLE-REGISTER LOAD-STORE ADDRESSING MODES

The ARM instruction set provides different modes for addressing memory. These modes incorporate one of the indexing methods: preindex with writeback, preindex, and postindex (see Table 3.4).

Table 3.4. Index methods.

Index method Data Base address register Example
Preindex with writeback mem[base + offset] base + offset LDR r0,[r1,#4]!
Preindex mem[base + offset] not updated LDR r0,[r1,#4]
Postindex mem[base] base + offset LDR r0,[r1],#4

Note: ! indicates that the instruction writes the calculated address back to the base address register.

EXAMPLE 3.16

Preindex with writeback calculates an address from a base register plus address offset and then updates that address base register with the new address. In contrast, the preindex offset is the same as the preindex with writeback but does not update the address base register. Postindex only updates the address base register after the address is used. The preindex mode is useful for accessing an element in a data structure. The postindex and preindex with writeback modes are useful for traversing an array.

Preindexing with writeback:

  LDR r0, [r1, #4]!

Preindexing:

  LDR r0, [r1, #4]

Postindexing:

  LDR r0, [r1], #4

Example 3.15 used a preindex method. This example shows how each indexing method affects the address held in register r1, as well as the data loaded into register r0. Each instruction shows the result of the index method with the same pre-condition.
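The effect of the three index methods can also be sketched in C, treating a pointer as the base address register. This is a hypothetical analogy for illustration only, not code from the book:

  #include <stdint.h>

  /* 'base' plays the role of r1; the return value plays the role of r0;
     the offset is one 32-bit word (4 bytes), as in LDR r0,[r1,#4]. */

  /* Preindex with writeback: LDR r0,[r1,#4]!  (r1 updated before the access) */
  uint32_t ldr_pre_writeback(uint32_t **base) {
      *base += 1;                /* r1 = r1 + 4 */
      return **base;             /* r0 = mem32[r1] */
  }

  /* Preindex: LDR r0,[r1,#4]  (offset used for the access, r1 unchanged) */
  uint32_t ldr_pre(const uint32_t *base) {
      return *(base + 1);        /* r0 = mem32[r1 + 4] */
  }

  /* Postindex: LDR r0,[r1],#4  (r1 updated after the access) */
  uint32_t ldr_post(uint32_t **base) {
      uint32_t value = **base;   /* r0 = mem32[r1] */
      *base += 1;                /* r1 = r1 + 4 */
      return value;
  }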

The addressing modes available with a particular load or store instruction depend on the instruction class. Table 3.5 shows the addressing modes available for load and store of a 32-bit word or an unsigned byte.

A signed offset or register is denoted by "+/−", identifying that it is either a positive or negative offset from the base address register Rn. The base address register is a pointer to a byte in memory, and the offset specifies a number of bytes.

Immediate means the address is calculated using the base address register and a 12-bit offset encoded in the instruction. Register means the address is calculated using the base address register and a specific register's contents. Scaled means the address is calculated using the base address register and a barrel shift operation.

Table 3.6 provides examples of the different variations of the LDR instruction. Table 3.7 shows the addressing modes available on load and store instructions using 16-bit halfword or signed byte data.

Table 3.6. Examples of LDR instructions using different addressing modes.

Instruction r0 = r1 + =
Preindex with writeback LDR r0,[r1,#0x4]! mem32[r1 + 0x4] 0x4
LDR r0,[r1,r2]! mem32[r1 + r2] r2
LDR r0,[r1,r2,LSR #0x4]! mem32[r1 + (r2 LSR 0x4)] (r2 LSR 0x4)
Preindex LDR r0,[r1,#0x4] mem32[r1 + 0x4] not updated
LDR r0,[r1,r2] mem32[r1 + r2] not updated
LDR r0,[r1,-r2,LSR #0x4] mem32[r1 - (r2 LSR 0x4)] not updated
Postindex LDR r0,[r1],#0x4 mem32[r1] 0x4
LDR r0,[r1],r2 mem32[r1] r2
LDR r0,[r1],r2,LSR #0x4 mem32[r1] (r2 LSR 0x4)

These operations cannot use the barrel shifter. There are no STRSB or STRSH instructions since STRH stores both signed and unsigned halfwords; similarly, STRB stores signed and unsigned bytes. Table 3.8 shows the variations of STRH instructions.

Table 3.8. Variations of STRH instructions.

Instruction Result r1 + =
Preindex with writeback STRH r0,[r1,#0x4]! mem16[r1 + 0x4] = r0 0x4
STRH r0,[r1,r2]! mem16[r1 + r2] = r0 r2
Preindex STRH r0,[r1,#0x4] mem16[r1 + 0x4] = r0 not updated
STRH r0,[r1,r2] mem16[r1 + r2] = r0 not updated
Postindex STRH r0,[r1],#0x4 mem16[r1] = r0 0x4
STRH r0,[r1],r2 mem16[r1] = r0 r2

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781558608740500046

Architecture

Sarah L. Harris , David Harris , in Digital Design and Computer Architecture, 2022

Exception Handlers

Exception handlers use four special-purpose registers, called control and status registers (CSRs), to handle an exception: mtvec, mcause, mepc, and mscratch. The machine trap-vector base-address register, mtvec, holds the address of the exception handler code. When an exception occurs, the processor records the cause of the exception in mcause (see Table 6.6), stores the PC of the excepting instruction in mepc, the machine exception PC register, and jumps to the exception handler at the address preconfigured in mtvec.

Table 6.6. Common exception cause encodings

Interrupt Exception Code Description
1 3 Machine software interrupt
1 7 Machine timer interrupt
1 11 Machine external interrupt
0 0 Instruction address misaligned
0 2 Illegal instruction
0 3 Breakpoint
0 4 Load address misaligned
0 5 Load access fault
0 6 Store address misaligned
0 7 Store access fault
0 8 Environment call from U-Mode
0 9 Environment call from S-Mode
0 11 Environment call from M-Mode

The value of mcause can be classified as either an interrupt or an exception, as indicated by the left-most column in Table 6.6, which is bit 31 of mcause. Bits [30:0] of mcause hold the exception code, which indicates the cause of the interrupt or exception.

Exceptions can use one of two exception handling modes: direct or vectored. RISC-V typically uses the direct mode described here, where all exceptions branch to the same address, that is, the base address encoded in bits 31:2 of mtvec. In vectored mode, exceptions branch to an offset from the base address, depending on the cause of the exception. Each offset is separated by a small number of addresses (for example, 32 bytes), so the exception handler code may need to jump to a larger exception handler to deal with the exception. The exception mode is encoded in bits 1:0 of mtvec; 00₂ is for direct mode and 01₂ for vectored.
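As a rough sketch (not from the text), the target-address computation can be written in C. It assumes the standard RISC-V spacing of 4 bytes per interrupt cause in vectored mode; the 32-byte spacing mentioned above is just one possible handler layout:

  #include <stdint.h>

  /* Hypothetical sketch of where the processor jumps on a trap (RV32).
     mtvec bits 1:0 hold the mode (0 = direct, 1 = vectored); bits 31:2 hold
     the base address. */
  uint32_t trap_target(uint32_t mtvec, uint32_t mcause) {
      uint32_t base      = mtvec & ~0x3u;         /* base address, bits 31:2      */
      uint32_t mode      = mtvec &  0x3u;         /* mode field, bits 1:0         */
      uint32_t interrupt = mcause >> 31;          /* 1 = interrupt, 0 = exception */
      uint32_t code      = mcause & 0x7FFFFFFFu;  /* exception code, bits 30:0    */

      if (mode == 1 && interrupt)                 /* vectored mode, interrupts    */
          return base + 4 * code;                 /* branch to base + 4 x cause   */
      return base;                                /* direct mode: common handler  */
  }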

After jumping to the address in mtvec, the exception handler reads the mcause register to examine what caused the exception and responds appropriately (e.g., by reading the keyboard on a hardware interrupt). It then either aborts the program or returns to the program by executing mret, the machine exception return instruction, which jumps to the address in mepc. Holding the PC of the excepting instruction in mepc is analogous to using ra to store the return address during a jal instruction. Exception handlers must use program registers (x1−x31) to handle exceptions, so they use the memory pointed to by mscratch to store and restore these registers.
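A minimal C sketch of the dispatch step in a direct-mode handler is shown below. The handler names are hypothetical, the CSR read uses GCC-style inline assembly for RISC-V, and an assembly wrapper is assumed to save and restore registers via mscratch and to execute mret:

  #include <stdint.h>

  static inline uint32_t read_mcause(void) {
      uint32_t cause;
      __asm__ volatile ("csrr %0, mcause" : "=r"(cause));
      return cause;
  }

  void handle_external_interrupt(void);   /* hypothetical device-specific handlers */
  void handle_illegal_instruction(void);

  void trap_dispatch(void) {
      uint32_t mcause    = read_mcause();
      uint32_t interrupt = mcause >> 31;          /* bit 31: interrupt vs. exception */
      uint32_t code      = mcause & 0x7FFFFFFFu;  /* bits 30:0: exception code       */

      if (interrupt && code == 11) {              /* machine external interrupt      */
          handle_external_interrupt();            /* e.g., read the keyboard         */
      } else if (!interrupt && code == 2) {       /* illegal instruction             */
          handle_illegal_instruction();
      }
      /* ... other causes from Table 6.6. The assembly wrapper then executes
         mret, returning to the address held in mepc. */
  }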

Exception-related registers are specific to the operating mode. M-mode registers are mtvec, mepc, mcause, and mscratch, and S-mode registers are sepc, scause, and sscratch. H-mode also has its own registers. Separate exception registers dedicated to each mode provide hardware support for multiple privilege levels.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780128200643000064

Case Study: System Design Using the Gumnut Core

Peter J. Ashenden , in The Designer's Guide to VHDL (Third Edition), 2008

Performing a Memory-I/O Instruction

The procedure perform_mem, shown below, performs memory and I/O instructions. The procedure first calculates the effective address by using the perform_alu_op procedure to add the values of the base address register and the displacement. The carry-in is set to '0', and the carry-out and zero flag results are unused. The procedure then uses the memory-I/O instruction function code to determine whether to perform a read or write operation using the data memory bus or the I/O port bus. The operations are performed in the same way as described earlier for fetching an instruction. For memory load and I/O input instructions, the procedure assigns the read data to the destination register, provided the register is not r0. For memory store and I/O output instructions, the procedure uses the value from the source register as the data for the bus write operation.

  procedure perform_mem is
    variable mem_addr : unsigned_byte;
    variable tmp_Z, tmp_C : std_ulogic;
  begin
    -- Compute the effective address: base address register plus displacement.
    perform_alu_op(fn => alu_fn_add,
                   a => GPR(to_integer(IR_rs)), b => IR_offset,
                   C_in => '0',
                   result => mem_addr,
                   Z_out => tmp_Z, C_out => tmp_C);
    case IR_mem_fn is
      when mem_fn_ldm =>                -- memory load
        data_cyc_o <= '1';
        data_stb_o <= '1';
        data_we_o  <= '0';
        data_adr_o <= mem_addr;
        ldm_loop : loop
          wait until rising_edge(clk_i);
          if rst_i then
            return;
          end if;
          exit ldm_loop when data_ack_i;
        end loop ldm_loop;
        if IR_rd /= 0 then
          GPR(to_integer(IR_rd)) := unsigned(data_dat_i);
        end if;
        data_cyc_o <= '0';
        data_stb_o <= '0';
      when mem_fn_stm =>                -- memory store
        data_cyc_o <= '1';
        data_stb_o <= '1';
        data_we_o  <= '1';
        data_adr_o <= mem_addr;
        data_dat_o <= std_ulogic_vector(GPR(to_integer(IR_rd)));
        stm_loop : loop
          wait until rising_edge(clk_i);
          if rst_i then
            return;
          end if;
          exit stm_loop when data_ack_i;
        end loop stm_loop;
        data_cyc_o <= '0';
        data_stb_o <= '0';
      when mem_fn_inp =>                -- I/O input
        port_cyc_o <= '1';
        port_stb_o <= '1';
        port_we_o  <= '0';
        port_adr_o <= mem_addr;
        inp_loop : loop
          wait until rising_edge(clk_i);
          if rst_i then
            return;
          end if;
          exit inp_loop when port_ack_i;
        end loop inp_loop;
        if IR_rd /= 0 then
          GPR(to_integer(IR_rd)) := unsigned(port_dat_i);
        end if;
        port_cyc_o <= '0';
        port_stb_o <= '0';
      when mem_fn_out =>                -- I/O output
        port_cyc_o <= '1';
        port_stb_o <= '1';
        port_we_o  <= '1';
        port_adr_o <= mem_addr;
        port_dat_o <= std_ulogic_vector(GPR(to_integer(IR_rd)));
        out_loop : loop
          wait until rising_edge(clk_i);
          if rst_i then
            return;
          end if;
          exit out_loop when port_ack_i;
        end loop out_loop;
        port_cyc_o <= '0';
        port_stb_o <= '0';
      when others =>
        report "Program logic error in interpreter"
          severity failure;
    end case;
  end procedure perform_mem;

For I/O input instructions, the procedure sets the port_addr signal to the effective address and asserts the port_read control signal. It then enters a loop in which it waits for the next clock edge. When that occurs, if the reset input is active, the procedure simply returns, allowing the main interpreter process to reset the processor state. If reset is inactive and the port_ready input is active, the procedure exits from the loop; otherwise, it repeats, waiting for the next clock edge. On exit from the loop, if the destination register is not r0, the procedure copies the data from the port_data_in signal to the destination register and clears the port_read control signal. I/O write instructions are performed similarly. The difference is that, prior to the loop, data is copied from the source register to the port_data_out signal and the port_write control signal is asserted. After the loop, the port_write signal is cleared.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780120887859000228

Memory management

G.R. Wilson , in Embedded Systems and Computer Architecture, 2002

16.6 Two-level paging*

As noted before, there is a row, or entry, in the Page Table for each page frame of the physical address space. For a microprocessor that has a 32-bit address and uses a 4 KB (2^12) page, the number of entries in the Page Table is 2^32/2^12 = 2^20. If each Page Table entry comprises four bytes, the size of the Page Table is 4 × 2^20 bytes = 4 MB. Since each task has its own Page Table, the memory space required for the Page Tables of all the tasks loaded into the computer is embarrassingly large. Indeed, the tables may use up most of the physically available memory!

To reduce the amount of main memory required by the Page Tables, we will split each Page Table into pages so that parts of a Page Table itself can be stored on disk. We can do this conveniently by using a two-level paging scheme. Consider that the 20-bit virtual page number part of the virtual address is split into two 10-bit fields, called Directory and Table, Figure 16.5. A register within the microprocessor, the Page Directory Physical Base Address Register, holds the physical address of the start of the Page Directory, which is a look-up table with just 2^10 (1024) entries. The 10-bit Directory field of the virtual address is added to the contents of the Page Directory Physical Base Address Register to obtain the physical address of an entry in the Page Directory, which is stored in main memory. Each of the 1024 Page Directory Entries contains the address of the start of a Page Table. In turn, each Page Table holds 1024 Page Table Entries, each of which holds the address of a page frame in main memory. All these addresses are 20 bits followed by 12 zeros, since they all point to 4K-aligned locations in main memory. As before, the lower 12 bits of the virtual address are used to indicate the offset within the main memory page.

Figure 16.5. Two-level paging scheme

Each table has 1024 entries, each of which uses 4 bytes. Thus, each table is 4 KB, so each table can be stored in a single page frame. We store the Page Directory in main memory. If it points to a Page Table that is not currently in main memory, a page fault will be generated and the virtual memory system software will load the required Page Table, itself one page long, from the disk.
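A C sketch of the resulting address translation is shown below. The layout follows the description above (32-bit virtual address, 10-bit Directory and Table fields, 12-bit offset, 4-byte entries whose upper 20 bits are a 4K-aligned address); present-bit checks and page-fault handling are omitted, and a 32-bit machine with directly addressable physical memory is assumed:

  #include <stdint.h>

  /* Hypothetical two-level translation of a virtual address to a physical address. */
  uint32_t translate(uint32_t page_dir_base, uint32_t virtual_addr) {
      uint32_t dir_index   = (virtual_addr >> 22) & 0x3FF;  /* upper 10 bits: Directory   */
      uint32_t table_index = (virtual_addr >> 12) & 0x3FF;  /* next 10 bits: Table        */
      uint32_t offset      =  virtual_addr        & 0xFFF;  /* lower 12 bits: page offset */

      uint32_t *page_dir   = (uint32_t *)page_dir_base;     /* from the Page Directory Physical Base Address Register */
      uint32_t *page_table = (uint32_t *)(page_dir[dir_index] & 0xFFFFF000u);
      uint32_t  page_frame =  page_table[table_index] & 0xFFFFF000u;

      return page_frame | offset;                           /* physical address */
  }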

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780750650649500171

Embedded Platform Boot Sequence

Peter Barry , Patrick Crowley , in Modern Embedded Computing, 2012

Early Initialization

The early initialization phase readies the bootstrap processor (BSP) and the I/O peripherals' base addresses needed to configure the memory controller.

In a UEFI-based system BIOS, the Security (SEC) and pre-EFI initialization (PEI) phases are usually synonymous with "early initialization." It does not matter whether legacy or UEFI BIOS is used; the early init sequence is the same for a given system. The detailed initialization steps are particular to the SOC architecture; the Intel architecture sequence consists of the steps outlined in the following sections.

CPU Initialization

This consists of simple configuration of processor and machine registers. There is no DRAM available, so the code may have to operate in a stackless environment. On most modern processors there is an internal cache that can be configured as RAM (cache as RAM, or CAR) to provide a software stack. Developers must write extremely tight code when using CAR, as an eviction would be catastrophic to the system at this point in the boot sequence; there is no memory to maintain coherency with at this time. There is a special mode for processors operating in cache as RAM called "no evict mode" (NEM), where a cache line miss in the processor will not cause an eviction. The motivation for having a stack frame is to be able to run C code; hence all the code that runs without a stack must use register-based calling conventions. Developing code with an available software stack is much easier, and initialization code often performs the minimal setup to use a stack even prior to DRAM initialization.

IA Microcode Update

The processor may need a microcode update. Microcode is a hardware layer of instructions involved in the implementation of the machine-defined architecture. It is most prevalent in CISC-based processors. Microcode is developed by the CPU vendor and incorporated into an internal CPU ROM during manufacture. Most processors allow that microcode to be updated in the field either through a firmware update or via an OS update of "configuration data." Intel can provide microcode updates that must be written to the writable microcode store. The updates are encrypted and signed by Intel such that only the processor that the microcode update was designed for can authenticate and load that update. On socketed systems, the BIOS may have to carry many flavors of microcode update depending on the number of processor steppings supported. It is important to load the microcode updates early in the boot sequence to limit the exposure of the system to any known bugs in the silicon.

Device Initialization

The device-specific portion of an Intel architecture memory map is highly configurable. Most devices are seen and accessed via a logical PCI bus hierarchy, although a small number may be memory-mapped devices that have part-specific access mechanisms. Device control registers are mapped to a predefined I/O or MMIO space and can be set up before the memory map is configured. This allows the early initial firmware to configure the memory map of the device needed to set up DRAM. Before DRAM can be configured, the firmware must establish the exact configuration of DRAM that is on the board. In most embedded cases the memory is soldered down on the board and the firmware is configured with the appropriate memory configuration with an initialized data structure. The Intel architecture reference platform memory map is described in more detail in Figure 6.3. SOC devices based on other processor architectures typically provide a static address map for all internal peripherals, with external devices connected via a bus interface. The bus-based devices are mapped to a memory range within the SOC address space. These SOC devices usually provide a configurable chip select register set specifying the base address and size of the memory range enabled by the chip select. SOCs based on Intel architecture primarily use the logical PCI infrastructure for internal and external devices. The location of the device in the host memory address space is defined by the PCI base address register (BAR) for each of the devices. The device initialization typically enables all the BAR registers for the devices required as part of the system boot path. BIOS will typically assign all devices in the system a PCI base address by writing the appropriate BAR registers.
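As a rough illustration (not from the text), the classic way firmware sizes and programs a 32-bit memory BAR through the legacy 0xCF8/0xCFC configuration mechanism is sketched below. The outl/inl port-I/O helpers are assumed to exist elsewhere, and modern firmware may use memory-mapped configuration space instead:

  #include <stdint.h>

  extern void     outl(uint16_t port, uint32_t value);   /* assumed port-I/O helpers */
  extern uint32_t inl(uint16_t port);

  static uint32_t pci_cfg_addr(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg) {
      return 0x80000000u | ((uint32_t)bus << 16) | ((uint32_t)dev << 11) |
             ((uint32_t)fn << 8) | (reg & 0xFCu);
  }

  static uint32_t pci_cfg_read(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg) {
      outl(0xCF8, pci_cfg_addr(bus, dev, fn, reg));
      return inl(0xCFC);
  }

  static void pci_cfg_write(uint8_t bus, uint8_t dev, uint8_t fn, uint8_t reg, uint32_t val) {
      outl(0xCF8, pci_cfg_addr(bus, dev, fn, reg));
      outl(0xCFC, val);
  }

  /* Size a 32-bit memory BAR by writing all ones and reading back the mask,
     then assign it a base address (memory decode assumed disabled meanwhile). */
  static uint32_t pci_assign_bar(uint8_t bus, uint8_t dev, uint8_t fn,
                                 uint8_t bar_reg, uint32_t mmio_base) {
      pci_cfg_write(bus, dev, fn, bar_reg, 0xFFFFFFFFu);
      uint32_t mask = pci_cfg_read(bus, dev, fn, bar_reg);
      uint32_t size = ~(mask & 0xFFFFFFF0u) + 1u;      /* required window size     */
      pci_cfg_write(bus, dev, fn, bar_reg, mmio_base); /* program the base address */
      return size;
  }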

Figure 6.3. Intel Architecture Memory Map at Power-On.

Memory Configuration

The initialization of the memory controller varies considerably depending on the DRAM technology and the capabilities of the memory controller itself. The information on the DRAM controller is often proprietary for SOC devices, and in such cases the initialization reference code is typically supplied by the SOC vendor. This is the case for Intel platforms, and you will have to contact Intel to request access to the low-level information required. There is a very wide range of DRAM configuration parameters, such as number of ranks, 8-bit or 16-bit addresses, overall memory size, constellation, soldered-down or add-in module (DIMM) configurations, page closing policy, and power management. Given that most embedded systems populate soldered-down DRAM on the board, the firmware may not need to discover the configuration at boot time. These configurations are known as memory-down. The firmware is specifically built for the target configuration. At current DRAM speeds, the wires between the memory controller and the DRAM devices behave like transmission lines; the SOC may provide automatic calibration and runtime control of resistive compensation (RCOMP) and delay-locked loop (DLL) capabilities. These capabilities allow the memory controller to modify elements such as the drive strength to ensure error-free operation over time and temperature variations.

If the platform supports add-in modules for memory, there are a number of standardized form factors for such memory. The small outline dual in-line memory module (SODIMM) is one such form factor often found in embedded systems. The DIMMs provide a serial PROM. The serial PROM devices contain the DRAM configuration information. The data are known as serial presence detect data (SPD data). The firmware reads the SPD data to identify the device configuration and subsequently configures the device. The serial PROM is connected via I2C/SMBUS; thus the I2C device must be available in this early initialization phase, so the software can establish the memory devices on board. In most cases where the memory is soldered down, the BIOS is configured with the memory configuration; however, it is also possible for memory-down motherboards to incorporate a serial SPD PROM to allow for multiple and updatable memory configurations to be handled efficiently by a single BIOS algorithm.

Post-Memory Setup

Once the memory controller has been initialized, a number of subsequent events take place. The first (optional) item is to run a memory test. The memory test is best performed at system startup and in particular on cold boot of the platform. Unfortunately, memory tests can take quite a long time, and the more thorough the testing, the longer the test takes. The embedded designer must make the trade-off between the robustness of the memory test and the delay in boot time it introduces. Some embedded devices use error correction code (ECC) memory, which may need extra initialization. After power up, the state of the error correction codes may not reflect the contents of the other memory bytes, and all memory must be written to; writing to memory ensures that the ECC bits are valid and sets the ECC bits to the appropriate contents. For security purposes, the memory may need to be zeroed out manually by the BIOS, or in some cases a memory controller may incorporate the feature in hardware to save time.
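A sketch of the ECC initialization step described above, assuming it is done by software rather than by a hardware engine, is simply a pass that writes every location once:

  #include <stdint.h>
  #include <stddef.h>

  /* Hypothetical software ECC initialization: writing each location makes the
     stored ECC check bits consistent with the data before any reads occur.
     Writing zeros also clears the memory, which may be required for security. */
  static void ecc_init(volatile uint64_t *base, size_t bytes) {
      for (size_t i = 0; i < bytes / sizeof(uint64_t); i++)
          base[i] = 0;
  }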

Shadowing

From the reset vector, execution starts off executing directly from the nonvolatile flash storage (NVRAM). This operating mode is known as execute in place (XIP). The read performance of nonvolatile storage is much slower than the read performance of DRAM, so most early firmware will copy code from the slower nonvolatile storage into RAM. The firmware then starts to run the RAM copy of the firmware. This process is sometimes known as shadowing. Shadowing involves having the same contents in RAM and flash; with a change in the address decoders the RAM copy is logically in front of the flash copy and the program starts to execute from RAM. On other embedded systems, the chip select ranges are managed to allow the change from flash to RAM execution. Most computing systems run as little as possible directly from flash. However, some constrained (in terms of RAM) embedded platforms execute the entire application in place (directly from flash memory). This is generally an option on very small embedded devices. Intel architecture platforms generally do not execute in place for anything but the very initial boot steps before memory has been configured. The firmware is often compressed. This allows reduction of the NVRAM requirements for the firmware. Clearly, the processor cannot execute a compressed image in place.

There is a trade-off between the storage requirements of an uncompressed firmware image and the time it takes to decompress the image. The decompression algorithm may take much longer to load and execute than it would for the image to remain uncompressed. Prefetchers in the processor, if enabled, may also speed up execution in place, and some SOCs have internal NVRAM cache buffers to assist in pipelining the data from the flash to the processor.

Figure 6.3 shows the memory map at initialization in real mode, which can only access 1 MB of memory.

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9780123914903000060

The Memory Protection Unit

Joseph Yiu , in The Definitive Guide to the ARM Cortex-M3 (Second Edition), 2010

13.2 MPU Registers

The MPU contains a number of registers. The first one is the MPU Type register. The MPU Type register can be used to determine whether the MPU is fitted. If the DREGION field is read as 0, the MPU is not implemented (see Table 13.1).

Table 13.1. MPU Type Register (0xE000ED90)

Bits Name Type Reset Value Description
23:16 IREGION R 0 Number of instruction regions supported by this MPU; because the ARMv7-M architecture uses a unified MPU, this is always 0
15:8 DREGION R 0 or 8 Number of regions supported by this MPU; in the Cortex-M3, this is either 0 (MPU not present) or 8 (MPU present)
0 SEPARATE R 0 This is always 0, as the MPU is unified

The MPU is controlled by a number of registers. The first one is the MPU Control register (see Table 13.2). This register has three control bits. After reset, the reset value of this register is zero, which disables the MPU. To enable the MPU, the software should set up the settings for each MPU region and then set the ENABLE bit in the MPU Control register.

Table 13.2. MPU Control Register (0xE000ED94)

Bits Name Type Reset Value Description
2 PRIVDEFENA R/W 0 Privileged default memory map enable; when set to 1 and the MPU is enabled, the default memory map is used for privileged accesses as a background region. If this bit is not set, the background region is disabled and any access not covered by any enabled region will cause a fault.
1 HFNMIENA R/W 0 If set to 1, it enables the MPU during the hard fault handler and nonmaskable interrupt (NMI) handler; otherwise, the MPU is not enabled (bypassed) for the hard fault handler and NMI.
0 ENABLE R/W 0 Enables the MPU if set to 1.

By using PRIVDEFENA, and if no other regions are set up, privileged programs will be able to access all memory locations, and only user programs will be blocked. However, if other MPU regions are programmed and enabled, they can override the background region. For example, for two systems with similar region setups but only one with PRIVDEFENA set to 1 (the right-hand side in Figure 13.1), the one with PRIVDEFENA set to 1 will allow privileged access to background regions.

FIGURE 13.1. The Effect of PRIVDEFENA.

Setting the enable bit in the MPU Control register is usually the last step in the MPU setup code. Otherwise, the MPU might generate faults by accident before the region configuration is done. In some situations, it might be worth clearing the MPU ENABLE bit at the start of the MPU configuration routine to make sure that MPU faults won't be triggered by accident during setup of the MPU regions.

The next MPU control register is the MPU Region Number register (see Table 13.3). Before each region is set up, write to this register to select the region to be programmed.

Table 13.3. MPU Region Number Register (0xE000ED98)

Bits Name Type Reset Value Description
7:0 REGION R/W Select the region that is being programmed. Because 8 regions are supported in the Cortex-M3 MPU, only bits [2:0] of this register are implemented.

The starting address of each region is defined by the MPU Region Base Address register (see Table 13.4). Using the VALID and REGION fields in this register, we can skip the step of programming the MPU Region Number register. This might reduce the complexity of the program code, especially if the whole MPU setup is defined in a lookup table.

Table 13.4. MPU Region Base Address Register (0xE000ED9C)

Bits Name Type Reset Value Description
31:N ADDR R/W Base address of the region; N is dependent on the region size. For example, a 64 KB region will have a base address field of [31:16].
4 VALID R/W If this is 1, the REGION defined in bits [3:0] will be used in this programming step; otherwise, the region selected by the MPU Region Number register is used.
3:0 REGION R/W This field overrides the MPU Region Number register if VALID is 1; otherwise, it is ignored. Because 8 regions are supported in the Cortex-M3 MPU, the region number override is ignored if the value of the REGION field is larger than 7.

We also need to define the properties of each region. This is controlled by the MPU Region Base Attribute and Size register (see Table 13.5).

Table 13.5. MPU Region Base Attribute and Size Register (0xE000EDA0)

Bits Name Type Reset Value Description
31:29 Reserved
28 XN R/W Instruction Access Disable (1 = disable instruction fetch from this region; an attempt to do so will result in a memory management fault)
27 Reserved
26:24 AP R/W Data Access Permission field
23:22 Reserved
21:19 TEX R/W Type Extension field
18 S R/W Shareable
17 C R/W Cacheable
16 B R/W Bufferable
15:8 SRD R/W Subregion disable
7:6 Reserved
5:1 REGION SIZE R/W MPU Protection Region size
0 ENABLE R/W Region enable

The REGION SIZE field (5 bits) in the MPU Region Base Attribute and Size register determines the size of the region (see Table 13.6).

Table 13.6. Encoding of REGION SIZE Field for Different Memory Region Sizes

REGION SIZE Size
b00000 Reserved
b00001 Reserved
b00010 Reserved
b00011 Reserved
b00100 32 bytes
b00101 64 bytes
b00110 128 bytes
b00111 256 bytes
b01000 512 bytes
b01001 1 KB
b01010 2 KB
b01011 4 KB
b01100 8 KB
b01101 16 KB
b01110 32 KB
b01111 64 KB
b10000 128 KB
b10001 256 KB
b10010 512 KB
b10011 1 MB
b10100 2 MB
b10101 4 MB
b10110 8 MB
b10111 16 MB
b11000 32 MB
b11001 64 MB
b11010 128 MB
b11011 256 MB
b11100 512 MB
b11101 1 GB
b11110 2 GB
b11111 4 GB

The subregion disable field (bits [15:8] of the MPU Region Base Attribute and Size register) is used to divide a region into eight equal subregions and then to define each as enabled or disabled. If a subregion is disabled and overlaps another region, the access rules for the other region are applied. If the subregion is disabled and does not overlap any other region, access to this memory range will result in a memory management fault. Subregions cannot be used if the region size is 128 bytes or less. The Data Access Permission (AP) field (bits [26:24]) defines the AP of the region (see Table 13.7).

Table 13.7. Encoding of AP Field for Various Access Permission Configurations

AP Value Privileged Access User Access Description
000 No access No access No access
001 Read/write No access Privileged access only
010 Read/write Read only Write in a user program generates a fault
011 Read/write Read/write Full access
100 Unpredictable Unpredictable Unpredictable
101 Read only No access Privileged read only
110 Read only Read only Read only
111 Read only Read only Read only

The XN (Execute Never) field (bit [28]) decides whether an instruction fetch from this region is allowed. When this field is set to 1, all instructions fetched from this region will generate a memory management fault when they enter the execution stage.

The TEX, S, B, and C fields (bits [21:16]) are more complex. Although the Cortex-M3 processor does not have a cache, its implementation follows the ARMv7-M architecture, which can support an external cache and more advanced memory systems. Therefore, the region access properties can be programmed to support different types of memory management models.

In the v6 and v7 architectures, the memory system can have two cache levels: inner cache and outer cache. They can have different caching policies. Because the Cortex-M3 processor itself does not have a cache controller, the cache policy only affects write buffering in the internal BusMatrix and possibly the memory controller (see Table 13.8). For most microcontrollers, the usage of memory attributes can be simplified to just a few memory types (see Figure 13.2).

Table 13.8. ARMv7-M Memory Attributes

TEX C B Description Region Shareability
b000 0 0 Strongly ordered (transfers carried out and completed in programmed order) Shareable
b000 0 1 Shared device (write can be buffered) Shareable
b000 1 0 Outer and inner write-through; no write allocate [S]
b000 1 1 Outer and inner write-back; no write allocate [S]
b001 0 0 Outer and inner noncacheable [S]
b001 0 1 Reserved Reserved
b001 1 0 Implementation defined
b001 1 1 Outer and inner write-back; write and read allocate [S]
b010 0 0 Nonshared device Not shared
b010 0 1 Reserved Reserved
b010 1 X Reserved Reserved
b1BB A A Cached memory; BB = outer policy, AA = inner policy [S]

Note: [S] indicates that shareability is determined by the S bit field (shared by multiple processors).

FIGURE 13.2. Commonly Used Memory Attributes in Microcontrollers.

If you are using a microcontroller with cache memory, then you should program the MPU according to the cache policy you want to apply (e.g., cache disabled, write-through cache, or write-back cache). When TEX[2] is 1, the cache policy for the outer cache and inner cache is as shown in Table 13.9.

Table 13.9. Encoding of Inner and Outer Cache Policy When the Most Significant Bit of TEX Is Set to 1

Memory Attribute Encoding (AA and BB) Cache Policy
00 Noncacheable
01 Write back, write and read allocate
10 Write through, no write allocate
11 Write back, no write allocate

For more information on cache behavior and cache policy, refer to the ARM Architecture Application Level Reference Manual [Ref. 2].
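Pulling the registers above together, a minimal region-setup sequence might look like the following C sketch. The register addresses come from Tables 13.1 through 13.5; the region number, base address, and attribute values are hypothetical, and memory barriers are omitted:

  #include <stdint.h>

  #define MPU_TYPE  (*(volatile uint32_t *)0xE000ED90u)
  #define MPU_CTRL  (*(volatile uint32_t *)0xE000ED94u)
  #define MPU_RBAR  (*(volatile uint32_t *)0xE000ED9Cu)
  #define MPU_RASR  (*(volatile uint32_t *)0xE000EDA0u)

  /* Hypothetical example: make region 0 a 64 KB full-access region at 0x20000000. */
  void mpu_setup_example(void) {
      if (((MPU_TYPE >> 8) & 0xFFu) == 0)   /* DREGION == 0: no MPU fitted        */
          return;

      MPU_CTRL = 0;                         /* keep the MPU disabled during setup */

      /* RBAR: base address, VALID (bit 4), and REGION number (bits 3:0), so the
         MPU Region Number register does not need to be written separately. */
      MPU_RBAR = 0x20000000u | (1u << 4) | 0u;

      /* RASR: AP = full access (b011 << 24), REGION SIZE = 64 KB (b01111 << 1),
         ENABLE = 1; TEX/S/C/B left at 0 for simplicity. */
      MPU_RASR = (0x3u << 24) | (0xFu << 1) | 0x1u;

      /* Enable the MPU last, with PRIVDEFENA (bit 2) so privileged code keeps
         the default memory map as a background region. */
      MPU_CTRL = (1u << 2) | 1u;
  }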

Read full chapter

URL:

https://www.sciencedirect.com/science/article/pii/B9781856179638000168