Remove base/allocator and base/debug/thread_heap_usage_tracker
Change-Id: I47e75aa53e9b24fefe6ee3c3d9b96be6b8fe4373
Reviewed-on: https://gn-review.googlesource.com/1540
Reviewed-by: Brett Wilson <brettw@chromium.org>
Commit-Queue: Scott Graham <scottmg@chromium.org>
diff --git a/base/allocator/OWNERS b/base/allocator/OWNERS
deleted file mode 100644
index de658d0..0000000
--- a/base/allocator/OWNERS
+++ /dev/null
@@ -1,4 +0,0 @@
-primiano@chromium.org
-wfh@chromium.org
-
-# COMPONENT: Internals
diff --git a/base/allocator/README.md b/base/allocator/README.md
deleted file mode 100644
index 62b9be6..0000000
--- a/base/allocator/README.md
+++ /dev/null
@@ -1,197 +0,0 @@
-This document describes how malloc / new calls are routed in the various Chrome
-platforms.
-
-Bear in mind that the Chromium codebase does not always just use `malloc()`.
-Some examples:
- - Large parts of the renderer (Blink) use two home-brewed allocators,
- PartitionAlloc and BlinkGC (Oilpan).
- - Some subsystems, such as the V8 JavaScript engine, handle memory management
- autonomously.
- - Various parts of the codebase use abstractions such as `SharedMemory` or
- `DiscardableMemory` which, similarly to the above, have their own page-level
- memory management.
-
-Background
-----------
-The `allocator` target defines at compile time the platform-specific choice of
-the allocator and the extra hooks that service calls to malloc/new. The
-relevant build-time flags involved are `use_allocator` and `use_allocator_shim`.
-
-The default choices are as follows:
-
-**Windows**
-`use_allocator: winheap`, the default Windows heap.
-Additionally, `static_library` (i.e. non-component) builds have a shim
-layer wrapping malloc/new, which is controlled by `use_allocator_shim`.
-The shim layer provides extra security features, such as preventing large
-allocations that can hit signed vs. unsigned bugs in third_party code.
-
-**Linux Desktop / CrOS**
-`use_allocator: tcmalloc`, a forked copy of tcmalloc which resides in
-`third_party/tcmalloc/chromium`. Setting `use_allocator: none` causes the build
-to fall back to the system (Glibc) symbols.
-
-**Android**
-`use_allocator: none`, always use the allocator symbols coming from Android's
-libc (Bionic). As it is developed as part of the OS, it is considered to be
-optimized for small devices and more memory-efficient than other choices.
-The actual implementation backing malloc symbols in Bionic is up to the board
-config and can vary (typically *dlmalloc* or *jemalloc* on most Nexus devices).
-
-**Mac/iOS**
-`use_allocator: none`, we always use the system's allocator implementation.
-
-In addition, when building for `asan` / `msan` both the allocator and the shim
-layer are disabled.
-
-Layering and build deps
------------------------
-The `allocator` target provides both the source files for tcmalloc (where
-applicable) and the linker flags required for the Windows shim layer.
-The `base` target is (almost) the only one depending on `allocator`. No other
-targets should depend on it, with the exception of the very few executables /
-dynamic libraries that don't depend, either directly or indirectly, on `base`
-within the scope of a linker unit.
-
-More importantly, **no other place outside of `/base` should depend on the
-specific allocator** (e.g., directly include `third_party/tcmalloc`).
-If such a functional dependency is required, it should be achieved using
-abstractions in `base` (see `/base/allocator/allocator_extension.h` and
-`/base/memory/`).
-
-**Why does `base` depend on `allocator`?**
-Because it needs to provide services that depend on the actual allocator
-implementation. In the past `base` used to pretend to be allocator-agnostic
-and get the dependencies injected by other layers. This ended up being an
-inconsistent mess.
-See the [allocator cleanup doc][url-allocator-cleanup] for more context.
-
-Linker unit targets (executables and shared libraries) that depend in some way
-on `base` (most of the targets in the codebase) automatically get the correct
-set of linker flags to pull in tcmalloc or the Windows shim layer.
-
-
-Source code
------------
-This directory contains just the allocator (i.e. shim) layer that switches
-between the different underlying memory allocation implementations.
-
-The tcmalloc library originates outside of Chromium and exists in
-`../../third_party/tcmalloc` (currently, the actual location is defined in the
-allocator.gyp file). The third party sources use a vendor-branch SCM pattern to
-track Chromium-specific changes independently from upstream changes.
-
-The general intent is to push local changes upstream so that over
-time we no longer need any forked files.
-
-
-Unified allocator shim
-----------------------
-On most platforms, Chrome overrides the malloc / operator new symbols (and
-corresponding free / delete and other variants). This is to enforce security
-checks and lately to enable the
-[memory-infra heap profiler][url-memory-infra-heap-profiler].
-Historically each platform had its special logic for defining the allocator
-symbols in different places of the codebase. The unified allocator shim is
-a project aimed to unify the symbol definition and allocator routing logic in
-a central place.
-
- - Full documentation: [Allocator shim design doc][url-allocator-shim].
- - Current state: Available and enabled by default on Android, CrOS, Linux,
- Mac OS and Windows.
- - Tracking bug: [crbug.com/550886](https://crbug.com/550886).
- - Build-time flag: `use_allocator_shim`.
-
-**Overview of the unified allocator shim**
-The allocator shim consists of three stages:
-```
-+-------------------------+ +-----------------------+ +----------------+
-| malloc & friends | -> | shim layer | -> | Routing to |
-| symbols definition | | implementation | | allocator |
-+-------------------------+ +-----------------------+ +----------------+
-| - libc symbols (malloc, | | - Security checks | | - tcmalloc |
-| calloc, free, ...) | | - Chain of dispatchers| | - glibc |
-| - C++ symbols (operator | | that can intercept | | - Android |
-| new, delete, ...) | | and override | | bionic |
-| - glibc weak symbols | | allocations | | - WinHeap |
-| (__libc_malloc, ...) | +-----------------------+ +----------------+
-+-------------------------+
-```
-
-**1. malloc symbols definition**
-This stage takes care of overriding the symbols `malloc`, `free`,
-`operator new`, `operator delete` and friends and routing those calls inside the
-allocator shim (next point).
-This is taken care of by the headers in `allocator_shim_override_*`.
-
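-As a concrete illustration, a minimal sketch of the override pattern (the real
-headers cover many more symbols and use a `SHIM_ALWAYS_EXPORT` macro rather
-than a raw visibility attribute; `ShimMalloc()` / `ShimFree()` are the shim
-entry points defined in `allocator_shim.cc`):
-
-```
-#include <stddef.h>
-
-// Forward declarations of the shim entry points (stage 2 below).
-extern "C" void* ShimMalloc(size_t size, void* context);
-extern "C" void ShimFree(void* address, void* context);
-
-extern "C" {
-
-// Exported definitions that the dynamic linker resolves malloc references
-// against, both for the main executable and for third-party libraries.
-__attribute__((visibility("default"))) void* malloc(size_t size) {
-  return ShimMalloc(size, nullptr);
-}
-
-__attribute__((visibility("default"))) void free(void* ptr) {
-  ShimFree(ptr, nullptr);
-}
-
-}  // extern "C"
-```
-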
-*On Linux/CrOS*: the allocator symbols are defined as exported global symbols
-in `allocator_shim_override_libc_symbols.h` (for `malloc`, `free` and friends)
-and in `allocator_shim_override_cpp_symbols.h` (for `operator new`,
-`operator delete` and friends).
-This enables proper interposition of malloc symbols referenced by the main
-executable and any third-party libraries. Symbol resolution on Linux is a
-breadth-first search that starts from the root link unit, i.e. the executable
-(see EXECUTABLE AND LINKABLE FORMAT (ELF) - Portable Formats Specification).
-Additionally, when tcmalloc is the default allocator, some extra glibc symbols
-are also defined in `allocator_shim_override_glibc_weak_symbols.h`, for subtle
-reasons explained in that file.
-The Linux/CrOS shim was introduced by
-[crrev.com/1675143004](https://crrev.com/1675143004).
-
-*On Android*: load-time symbol interposition (unlike the Linux/CrOS case) is not
-possible. This is because Android processes are `fork()`-ed from the Android
-zygote, which pre-loads libc.so and only later native code gets loaded via
-`dlopen()` (symbols from `dlopen()`-ed libraries get a different resolution
-scope).
-In this case, the approach relies instead on wrapping the allocator symbols at
-link time (i.e. during the build), via the `-Wl,-wrap,malloc` linker flag.
-The use of this wrapping flag causes:
- - All references to allocator symbols in the Chrome codebase to be rewritten as
- references to `__wrap_malloc` and friends. The `__wrap_malloc` symbols are
- defined in the `allocator_shim_override_linker_wrapped_symbols.h` and
- route allocator calls inside the shim layer.
- - References to the original `malloc` symbols (which are typically defined by
-   the system's libc.so) remain accessible via the special `__real_malloc` and
-   friends symbols (which will be relocated, at load time, against `malloc`).
-
-In summary, this approach is transparent to the dynamic loader, which still sees
-undefined symbol references to malloc symbols.
-These symbols will be resolved against libc.so as usual.
-More details in [crrev.com/1719433002](https://crrev.com/1719433002).
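-
-A sketch of the wrapping contract (simplified; the real set of wrapped symbols
-is larger):
-
-```
-#include <stddef.h>
-
-extern "C" void* ShimMalloc(size_t size, void* context);
-
-extern "C" {
-
-// Provided by the linker: the original symbol, which the dynamic loader
-// resolves against libc.so at load time.
-void* __real_malloc(size_t size);
-
-// Every malloc call site in the build is rewritten to call this instead.
-void* __wrap_malloc(size_t size) {
-  return ShimMalloc(size, nullptr);  // Route into the shim layer.
-}
-
-}  // extern "C"
-```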
-
-**2. Shim layer implementation**
-This stage contains the actual shim implementation. This consists of:
-- A singly linked list of dispatchers (structs with function pointers to
-  `malloc`-like functions). Dispatchers can be dynamically inserted at runtime
-  (using the `InsertAllocatorDispatch` API). They can intercept and override
-  allocator calls; see the sketch after this list.
-- The security checks (e.g., terminating on malloc failure via
-  `std::new_handler`). This happens inside `allocator_shim.cc`.
-
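-A minimal sketch of a runtime-inserted dispatcher that counts allocations and
-forwards everything else unchanged (illustrative only; the helper names are
-made up, while the `AllocatorDispatch` layout and `InsertAllocatorDispatch()`
-come from `allocator_shim.h`):
-
-```
-#include <atomic>
-
-#include "base/allocator/allocator_shim.h"
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-std::atomic<size_t> g_alloc_count{0};
-
-// Counts the allocation, then forwards to the next stage in the chain.
-void* CountingAlloc(const AllocatorDispatch* self, size_t size,
-                    void* context) {
-  g_alloc_count.fetch_add(1, std::memory_order_relaxed);
-  return self->next->alloc_function(self->next, size, context);
-}
-
-// All remaining entries forward straight to the next dispatcher.
-void* ForwardAllocZeroed(const AllocatorDispatch* self, size_t n, size_t size,
-                         void* context) {
-  return self->next->alloc_zero_initialized_function(self->next, n, size,
-                                                     context);
-}
-
-void* ForwardAllocAligned(const AllocatorDispatch* self, size_t alignment,
-                          size_t size, void* context) {
-  return self->next->alloc_aligned_function(self->next, alignment, size,
-                                            context);
-}
-
-void* ForwardRealloc(const AllocatorDispatch* self, void* address, size_t size,
-                     void* context) {
-  return self->next->realloc_function(self->next, address, size, context);
-}
-
-void ForwardFree(const AllocatorDispatch* self, void* address, void* context) {
-  self->next->free_function(self->next, address, context);
-}
-
-size_t ForwardGetSizeEstimate(const AllocatorDispatch* self, void* address,
-                              void* context) {
-  return self->next->get_size_estimate_function(self->next, address, context);
-}
-
-unsigned ForwardBatchMalloc(const AllocatorDispatch* self, size_t size,
-                            void** results, unsigned num_requested,
-                            void* context) {
-  return self->next->batch_malloc_function(self->next, size, results,
-                                           num_requested, context);
-}
-
-void ForwardBatchFree(const AllocatorDispatch* self, void** to_be_freed,
-                      unsigned num_to_be_freed, void* context) {
-  self->next->batch_free_function(self->next, to_be_freed, num_to_be_freed,
-                                  context);
-}
-
-void ForwardFreeDefiniteSize(const AllocatorDispatch* self, void* ptr,
-                             size_t size, void* context) {
-  self->next->free_definite_size_function(self->next, ptr, size, context);
-}
-
-AllocatorDispatch g_counting_dispatch = {
-    &CountingAlloc,      &ForwardAllocZeroed, &ForwardAllocAligned,
-    &ForwardRealloc,     &ForwardFree,        &ForwardGetSizeEstimate,
-    &ForwardBatchMalloc, &ForwardBatchFree,   &ForwardFreeDefiniteSize,
-    nullptr /* next; set by InsertAllocatorDispatch() */};
-
-}  // namespace
-
-void InstallCountingHook() {
-  base::allocator::InsertAllocatorDispatch(&g_counting_dispatch);
-}
-```
-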
-**3. Final allocator routing**
-The final element of the aforementioned dispatcher chain is statically defined
-at build time and ultimately routes the allocator calls to the actual allocator
-(as described in the *Background* section above). This is taken care of by the
-headers in `allocator_shim_default_dispatch_to_*` files.
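-
-As a sketch, the tail of the chain in the glibc configuration looks roughly
-like this (cf. `allocator_shim_default_dispatch_to_glibc.cc`; helper names
-beyond those visible in that file are assumptions, and the member comments are
-added for clarity):
-
-```
-const AllocatorDispatch AllocatorDispatch::default_dispatch = {
-    &GlibcMalloc,            // alloc_function
-    &GlibcCalloc,            // alloc_zero_initialized_function
-    &GlibcMemalign,          // alloc_aligned_function
-    &GlibcRealloc,           // realloc_function
-    &GlibcFree,              // free_function
-    &GlibcGetSizeEstimate,   // get_size_estimate_function
-    &GlibcBatchMalloc,       // batch_malloc_function
-    &GlibcBatchFree,         // batch_free_function
-    &GlibcFreeDefiniteSize,  // free_definite_size_function
-    nullptr,                 // next: end of the dispatcher chain.
-};
-```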
-
-
-Appendixes
-----------
-**How does the Windows shim layer replace the malloc symbols?**
-The mechanism for hooking LIBCMT in Windows is rather tricky. The core
-problem is that by default, the Windows library does not declare malloc and
-free as weak symbols. Because of this, they cannot be overridden. To work
-around this, we start with the LIBCMT.LIB, and manually remove all allocator
-related functions from it using the Visual Studio library tool. Once removed,
-we can now link against the library and provide custom versions of the
-allocator related functionality.
-See the script `prep_libc.py` in this folder.
-
-Related links
--------------
-- [Unified allocator shim doc - Feb 2016][url-allocator-shim]
-- [Allocator cleanup doc - Jan 2016][url-allocator-cleanup]
-- [Proposal to use PartitionAlloc as default allocator](https://crbug.com/339604)
-- [Memory-Infra: Tools to profile memory usage in Chrome](/docs/memory-infra/README.md)
-
-[url-allocator-cleanup]: https://docs.google.com/document/d/1V77Kgp_4tfaaWPEZVxNevoD02wXiatnAv7Ssgr0hmjg/edit?usp=sharing
-[url-memory-infra-heap-profiler]: /docs/memory-infra/heap_profiler.md
-[url-allocator-shim]: https://docs.google.com/document/d/1yKlO1AO4XjpDad9rjcBOI15EKdAGsuGO_IeZy0g0kxo/edit?usp=sharing
diff --git a/base/allocator/allocator_check.cc b/base/allocator/allocator_check.cc
deleted file mode 100644
index 57c1eb0..0000000
--- a/base/allocator/allocator_check.cc
+++ /dev/null
@@ -1,41 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_check.h"
-
-#include "build_config.h"
-
-#if defined(OS_WIN)
-#include "base/allocator/winheap_stubs_win.h"
-#endif
-
-#if defined(OS_LINUX)
-#include <malloc.h>
-#endif
-
-#if defined(OS_MACOSX)
-#include "base/allocator/allocator_interception_mac.h"
-#endif
-
-namespace base {
-namespace allocator {
-
-bool IsAllocatorInitialized() {
-#if defined(OS_LINUX) && defined(USE_TCMALLOC) && \
- !defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-// From third_party/tcmalloc/chromium/src/gperftools/tcmalloc.h.
-// TODO(primiano): replace with an include once base can depend on allocator.
-#define TC_MALLOPT_IS_OVERRIDDEN_BY_TCMALLOC 0xbeef42
- return (mallopt(TC_MALLOPT_IS_OVERRIDDEN_BY_TCMALLOC, 0) ==
- TC_MALLOPT_IS_OVERRIDDEN_BY_TCMALLOC);
-#elif defined(OS_MACOSX) && !defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- // From allocator_interception_mac.mm.
- return base::allocator::g_replaced_default_zone;
-#else
- return true;
-#endif
-}
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_check.h b/base/allocator/allocator_check.h
deleted file mode 100644
index cf519fd..0000000
--- a/base/allocator/allocator_check.h
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_CHECK_H_
-#define BASE_ALLOCATOR_ALLOCATOR_CHECK_H_
-
-#include "base/base_export.h"
-
-namespace base {
-namespace allocator {
-
-BASE_EXPORT bool IsAllocatorInitialized();
-
-} // namespace allocator
-} // namespace base
-
-#endif  // BASE_ALLOCATOR_ALLOCATOR_CHECK_H_
diff --git a/base/allocator/allocator_extension.cc b/base/allocator/allocator_extension.cc
deleted file mode 100644
index 9a3d114..0000000
--- a/base/allocator/allocator_extension.cc
+++ /dev/null
@@ -1,60 +0,0 @@
-// Copyright (c) 2012 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_extension.h"
-
-#include "base/logging.h"
-
-#if defined(USE_TCMALLOC)
-#include "third_party/tcmalloc/chromium/src/gperftools/heap-profiler.h"
-#include "third_party/tcmalloc/chromium/src/gperftools/malloc_extension.h"
-#include "third_party/tcmalloc/chromium/src/gperftools/malloc_hook.h"
-#endif
-
-namespace base {
-namespace allocator {
-
-void ReleaseFreeMemory() {
-#if defined(USE_TCMALLOC)
- ::MallocExtension::instance()->ReleaseFreeMemory();
-#endif
-}
-
-bool GetNumericProperty(const char* name, size_t* value) {
-#if defined(USE_TCMALLOC)
- return ::MallocExtension::instance()->GetNumericProperty(name, value);
-#endif
- return false;
-}
-
-bool IsHeapProfilerRunning() {
-#if defined(USE_TCMALLOC)
- return ::IsHeapProfilerRunning();
-#endif
- return false;
-}
-
-void SetHooks(AllocHookFunc alloc_hook, FreeHookFunc free_hook) {
-// TODO(sque): Use allocator shim layer instead.
-#if defined(USE_TCMALLOC)
- // Make sure no hooks get overwritten.
- auto prev_alloc_hook = MallocHook::SetNewHook(alloc_hook);
- if (alloc_hook)
- DCHECK(!prev_alloc_hook);
-
- auto prev_free_hook = MallocHook::SetDeleteHook(free_hook);
- if (free_hook)
- DCHECK(!prev_free_hook);
-#endif
-}
-
-int GetCallStack(void** stack, int max_stack_size) {
-#if defined(USE_TCMALLOC)
- return MallocHook::GetCallerStackTrace(stack, max_stack_size, 0);
-#endif
- return 0;
-}
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_extension.h b/base/allocator/allocator_extension.h
deleted file mode 100644
index 00d88cd..0000000
--- a/base/allocator/allocator_extension.h
+++ /dev/null
@@ -1,51 +0,0 @@
-// Copyright (c) 2012 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_EXTENSION_H_
-#define BASE_ALLOCATOR_ALLOCATOR_EXTENSION_H_
-
-#include <stddef.h> // for size_t
-
-#include "base/base_export.h"
-#include "build_config.h"
-
-namespace base {
-namespace allocator {
-
-// Callback types for alloc and free.
-using AllocHookFunc = void (*)(const void*, size_t);
-using FreeHookFunc = void (*)(const void*);
-
-// Request that the allocator release any free memory it knows about to the
-// system.
-BASE_EXPORT void ReleaseFreeMemory();
-
-// Get the named property's |value|. Returns true if the property is known.
-// Returns false if the property is not a valid property name for the current
-// allocator implementation.
-// Neither |name| nor |value| can be NULL.
-BASE_EXPORT bool GetNumericProperty(const char* name, size_t* value);
-
-BASE_EXPORT bool IsHeapProfilerRunning();
-
-// Register callbacks for alloc and free. Can only store one callback at a time
-// for each of alloc and free.
-BASE_EXPORT void SetHooks(AllocHookFunc alloc_hook, FreeHookFunc free_hook);
-
-// Attempts to unwind the call stack from the current location where this
-// function is being called from. Must be called, directly or indirectly, from
-// a hook function registered by calling SetHooks().
-//
-// Arguments:
-// stack: pointer to a pre-allocated array of void*'s.
-// max_stack_size: indicates the size of the array in |stack|.
-//
-// Returns the number of call stack frames stored in |stack|, or 0 if no call
-// stack information is available.
-BASE_EXPORT int GetCallStack(void** stack, int max_stack_size);
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_ALLOCATOR_EXTENSION_H_
diff --git a/base/allocator/allocator_interception_mac.h b/base/allocator/allocator_interception_mac.h
deleted file mode 100644
index 68f1d53..0000000
--- a/base/allocator/allocator_interception_mac.h
+++ /dev/null
@@ -1,56 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_INTERCEPTION_MAC_H_
-#define BASE_ALLOCATOR_ALLOCATOR_INTERCEPTION_MAC_H_
-
-#include <stddef.h>
-
-#include "base/base_export.h"
-#include "third_party/apple_apsl/malloc.h"
-
-namespace base {
-namespace allocator {
-
-struct MallocZoneFunctions;
-
-// Saves the function pointers currently used by the default zone.
-void StoreFunctionsForDefaultZone();
-
-// Same as StoreFunctionsForDefaultZone, but for all malloc zones.
-void StoreFunctionsForAllZones();
-
-// For all malloc zones that have been stored, replace their functions with
-// |functions|.
-void ReplaceFunctionsForStoredZones(const MallocZoneFunctions* functions);
-
-extern bool g_replaced_default_zone;
-
-// Calls the original implementation of malloc/calloc prior to interception.
-bool UncheckedMallocMac(size_t size, void** result);
-bool UncheckedCallocMac(size_t num_items, size_t size, void** result);
-
-// Intercepts calls to default and purgeable malloc zones. Intercepts Core
-// Foundation and Objective-C allocations.
-// Has no effect on the default malloc zone if the allocator shim already
-// performs that interception.
-BASE_EXPORT void InterceptAllocationsMac();
-
-// Updates all malloc zones to use their original functions.
-// Also calls ClearAllMallocZonesForTesting.
-BASE_EXPORT void UninterceptMallocZonesForTesting();
-
-// Periodically checks for, and shims, new malloc zones. Stops checking after
-// 1 minute.
-BASE_EXPORT void PeriodicallyShimNewMallocZones();
-
-// Exposed for testing.
-BASE_EXPORT void ShimNewMallocZones();
-BASE_EXPORT void ReplaceZoneFunctions(ChromeMallocZone* zone,
- const MallocZoneFunctions* functions);
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_ALLOCATOR_INTERCEPTION_MAC_H_
diff --git a/base/allocator/allocator_interception_mac.mm b/base/allocator/allocator_interception_mac.mm
deleted file mode 100644
index 17cf3f0..0000000
--- a/base/allocator/allocator_interception_mac.mm
+++ /dev/null
@@ -1,567 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// This file contains all the logic necessary to intercept allocations on
-// macOS. "malloc zones" are an abstraction that allows the process to intercept
-// all malloc-related functions. There is no good mechanism [short of
-// interposition] to determine when new malloc zones are added, so there's no
-// clean mechanism to intercept all malloc zones. This file contains logic to
-// intercept the default and purgeable zones, which always exist. A cursory
-// review of Chrome seems to imply that non-default zones are almost never used.
-//
-// This file also contains logic to intercept Core Foundation and Objective-C
-// allocations. The implementations forward to the default malloc zone, so the
-// only reason to intercept these calls is to re-label OOM crashes with slightly
-// more details.
-
-#include "base/allocator/allocator_interception_mac.h"
-
-#include <CoreFoundation/CoreFoundation.h>
-#import <Foundation/Foundation.h>
-#include <errno.h>
-#include <mach/mach.h>
-#include <mach/mach_vm.h>
-#import <objc/runtime.h>
-#include <stddef.h>
-
-#include <new>
-
-#include "base/allocator/malloc_zone_functions_mac.h"
-#include "base/bind.h"
-#include "base/logging.h"
-#include "base/mac/mac_util.h"
-#include "base/mac/mach_logging.h"
-#include "base/process/memory.h"
-#include "base/scoped_clear_errno.h"
-#include "base/threading/sequenced_task_runner_handle.h"
-#include "build_config.h"
-#include "third_party/apple_apsl/CFBase.h"
-
-namespace base {
-namespace allocator {
-
-bool g_replaced_default_zone = false;
-
-namespace {
-
-bool g_oom_killer_enabled;
-
-// Starting with Mac OS X 10.7, the zone allocators set up by the system are
-// read-only, to prevent them from being overwritten in an attack. However,
-// blindly unprotecting and reprotecting the zone allocators fails with
-// GuardMalloc because GuardMalloc sets up its zone allocator using a block of
-// memory in its bss. Explicit saving/restoring of the protection is required.
-//
-// This function takes a pointer to a malloc zone, de-protects it if necessary,
-// and returns (in the out parameters) a region of memory (if any) to be
-// re-protected when modifications are complete. This approach assumes that
-// there is no contention for the protection of this memory.
-void DeprotectMallocZone(ChromeMallocZone* default_zone,
- mach_vm_address_t* reprotection_start,
- mach_vm_size_t* reprotection_length,
- vm_prot_t* reprotection_value) {
- mach_port_t unused;
- *reprotection_start = reinterpret_cast<mach_vm_address_t>(default_zone);
- struct vm_region_basic_info_64 info;
- mach_msg_type_number_t count = VM_REGION_BASIC_INFO_COUNT_64;
- kern_return_t result = mach_vm_region(
- mach_task_self(), reprotection_start, reprotection_length,
- VM_REGION_BASIC_INFO_64, reinterpret_cast<vm_region_info_t>(&info),
- &count, &unused);
- MACH_CHECK(result == KERN_SUCCESS, result) << "mach_vm_region";
-
- // The kernel always returns a null object for VM_REGION_BASIC_INFO_64, but
- // balance it with a deallocate in case this ever changes. See 10.9.2
- // xnu-2422.90.20/osfmk/vm/vm_map.c vm_map_region.
- mach_port_deallocate(mach_task_self(), unused);
-
- // Does the region fully enclose the zone pointers? Possibly unwarranted
- // simplification used: using the size of a full version 8 malloc zone rather
- // than the actual smaller size if the passed-in zone is not version 8.
- CHECK(*reprotection_start <=
- reinterpret_cast<mach_vm_address_t>(default_zone));
- mach_vm_size_t zone_offset =
- reinterpret_cast<mach_vm_size_t>(default_zone) -
- reinterpret_cast<mach_vm_size_t>(*reprotection_start);
- CHECK(zone_offset + sizeof(ChromeMallocZone) <= *reprotection_length);
-
- if (info.protection & VM_PROT_WRITE) {
- // No change needed; the zone is already writable.
- *reprotection_start = 0;
- *reprotection_length = 0;
- *reprotection_value = VM_PROT_NONE;
- } else {
- *reprotection_value = info.protection;
- result = mach_vm_protect(mach_task_self(), *reprotection_start,
- *reprotection_length, false,
- info.protection | VM_PROT_WRITE);
- MACH_CHECK(result == KERN_SUCCESS, result) << "mach_vm_protect";
- }
-}
-
-#if !defined(ADDRESS_SANITIZER)
-
-MallocZoneFunctions g_old_zone;
-MallocZoneFunctions g_old_purgeable_zone;
-
-void* oom_killer_malloc(struct _malloc_zone_t* zone, size_t size) {
- void* result = g_old_zone.malloc(zone, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void* oom_killer_calloc(struct _malloc_zone_t* zone,
- size_t num_items,
- size_t size) {
- void* result = g_old_zone.calloc(zone, num_items, size);
- if (!result && num_items && size)
- TerminateBecauseOutOfMemory(num_items * size);
- return result;
-}
-
-void* oom_killer_valloc(struct _malloc_zone_t* zone, size_t size) {
- void* result = g_old_zone.valloc(zone, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void oom_killer_free(struct _malloc_zone_t* zone, void* ptr) {
- g_old_zone.free(zone, ptr);
-}
-
-void* oom_killer_realloc(struct _malloc_zone_t* zone, void* ptr, size_t size) {
- void* result = g_old_zone.realloc(zone, ptr, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void* oom_killer_memalign(struct _malloc_zone_t* zone,
- size_t alignment,
- size_t size) {
- void* result = g_old_zone.memalign(zone, alignment, size);
- // Only die if posix_memalign would have returned ENOMEM, since there are
- // other reasons why NULL might be returned (see
- // http://opensource.apple.com/source/Libc/Libc-583/gen/malloc.c ).
- if (!result && size && alignment >= sizeof(void*) &&
- (alignment & (alignment - 1)) == 0) {
- TerminateBecauseOutOfMemory(size);
- }
- return result;
-}
-
-void* oom_killer_malloc_purgeable(struct _malloc_zone_t* zone, size_t size) {
- void* result = g_old_purgeable_zone.malloc(zone, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void* oom_killer_calloc_purgeable(struct _malloc_zone_t* zone,
- size_t num_items,
- size_t size) {
- void* result = g_old_purgeable_zone.calloc(zone, num_items, size);
- if (!result && num_items && size)
- TerminateBecauseOutOfMemory(num_items * size);
- return result;
-}
-
-void* oom_killer_valloc_purgeable(struct _malloc_zone_t* zone, size_t size) {
- void* result = g_old_purgeable_zone.valloc(zone, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void oom_killer_free_purgeable(struct _malloc_zone_t* zone, void* ptr) {
- g_old_purgeable_zone.free(zone, ptr);
-}
-
-void* oom_killer_realloc_purgeable(struct _malloc_zone_t* zone,
- void* ptr,
- size_t size) {
- void* result = g_old_purgeable_zone.realloc(zone, ptr, size);
- if (!result && size)
- TerminateBecauseOutOfMemory(size);
- return result;
-}
-
-void* oom_killer_memalign_purgeable(struct _malloc_zone_t* zone,
- size_t alignment,
- size_t size) {
- void* result = g_old_purgeable_zone.memalign(zone, alignment, size);
- // Only die if posix_memalign would have returned ENOMEM, since there are
- // other reasons why NULL might be returned (see
- // http://opensource.apple.com/source/Libc/Libc-583/gen/malloc.c ).
- if (!result && size && alignment >= sizeof(void*) &&
- (alignment & (alignment - 1)) == 0) {
- TerminateBecauseOutOfMemory(size);
- }
- return result;
-}
-
-#endif // !defined(ADDRESS_SANITIZER)
-
-#if !defined(ADDRESS_SANITIZER)
-
-// === Core Foundation CFAllocators ===
-
-bool CanGetContextForCFAllocator() {
- return !base::mac::IsOSLaterThan10_13_DontCallThis();
-}
-
-CFAllocatorContext* ContextForCFAllocator(CFAllocatorRef allocator) {
- ChromeCFAllocatorLions* our_allocator = const_cast<ChromeCFAllocatorLions*>(
- reinterpret_cast<const ChromeCFAllocatorLions*>(allocator));
- return &our_allocator->_context;
-}
-
-CFAllocatorAllocateCallBack g_old_cfallocator_system_default;
-CFAllocatorAllocateCallBack g_old_cfallocator_malloc;
-CFAllocatorAllocateCallBack g_old_cfallocator_malloc_zone;
-
-void* oom_killer_cfallocator_system_default(CFIndex alloc_size,
- CFOptionFlags hint,
- void* info) {
- void* result = g_old_cfallocator_system_default(alloc_size, hint, info);
- if (!result)
- TerminateBecauseOutOfMemory(alloc_size);
- return result;
-}
-
-void* oom_killer_cfallocator_malloc(CFIndex alloc_size,
- CFOptionFlags hint,
- void* info) {
- void* result = g_old_cfallocator_malloc(alloc_size, hint, info);
- if (!result)
- TerminateBecauseOutOfMemory(alloc_size);
- return result;
-}
-
-void* oom_killer_cfallocator_malloc_zone(CFIndex alloc_size,
- CFOptionFlags hint,
- void* info) {
- void* result = g_old_cfallocator_malloc_zone(alloc_size, hint, info);
- if (!result)
- TerminateBecauseOutOfMemory(alloc_size);
- return result;
-}
-
-#endif // !defined(ADDRESS_SANITIZER)
-
-// === Cocoa NSObject allocation ===
-
-typedef id (*allocWithZone_t)(id, SEL, NSZone*);
-allocWithZone_t g_old_allocWithZone;
-
-id oom_killer_allocWithZone(id self, SEL _cmd, NSZone* zone) {
- id result = g_old_allocWithZone(self, _cmd, zone);
- if (!result)
- TerminateBecauseOutOfMemory(0);
- return result;
-}
-
-void UninterceptMallocZoneForTesting(struct _malloc_zone_t* zone) {
- ChromeMallocZone* chrome_zone = reinterpret_cast<ChromeMallocZone*>(zone);
- if (!IsMallocZoneAlreadyStored(chrome_zone))
- return;
- MallocZoneFunctions& functions = GetFunctionsForZone(zone);
- ReplaceZoneFunctions(chrome_zone, &functions);
-}
-
-} // namespace
-
-bool UncheckedMallocMac(size_t size, void** result) {
-#if defined(ADDRESS_SANITIZER)
- *result = malloc(size);
-#else
- if (g_old_zone.malloc) {
- *result = g_old_zone.malloc(malloc_default_zone(), size);
- } else {
- *result = malloc(size);
- }
-#endif // defined(ADDRESS_SANITIZER)
-
- return *result != NULL;
-}
-
-bool UncheckedCallocMac(size_t num_items, size_t size, void** result) {
-#if defined(ADDRESS_SANITIZER)
- *result = calloc(num_items, size);
-#else
- if (g_old_zone.calloc) {
- *result = g_old_zone.calloc(malloc_default_zone(), num_items, size);
- } else {
- *result = calloc(num_items, size);
- }
-#endif // defined(ADDRESS_SANITIZER)
-
- return *result != NULL;
-}
-
-void StoreFunctionsForDefaultZone() {
- ChromeMallocZone* default_zone = reinterpret_cast<ChromeMallocZone*>(
- malloc_default_zone());
- StoreMallocZone(default_zone);
-}
-
-void StoreFunctionsForAllZones() {
- // This ensures that the default zone is always at the front of the array,
- // which is important for performance.
- StoreFunctionsForDefaultZone();
-
- vm_address_t* zones;
- unsigned int count;
- kern_return_t kr = malloc_get_all_zones(mach_task_self(), 0, &zones, &count);
- if (kr != KERN_SUCCESS)
- return;
- for (unsigned int i = 0; i < count; ++i) {
- ChromeMallocZone* zone = reinterpret_cast<ChromeMallocZone*>(zones[i]);
- StoreMallocZone(zone);
- }
-}
-
-void ReplaceFunctionsForStoredZones(const MallocZoneFunctions* functions) {
- // The default zone does not get returned in malloc_get_all_zones().
- ChromeMallocZone* default_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_zone());
- if (DoesMallocZoneNeedReplacing(default_zone, functions)) {
- ReplaceZoneFunctions(default_zone, functions);
- }
-
- vm_address_t* zones;
- unsigned int count;
- kern_return_t kr =
- malloc_get_all_zones(mach_task_self(), nullptr, &zones, &count);
- if (kr != KERN_SUCCESS)
- return;
- for (unsigned int i = 0; i < count; ++i) {
- ChromeMallocZone* zone = reinterpret_cast<ChromeMallocZone*>(zones[i]);
- if (DoesMallocZoneNeedReplacing(zone, functions)) {
- ReplaceZoneFunctions(zone, functions);
- }
- }
- g_replaced_default_zone = true;
-}
-
-void InterceptAllocationsMac() {
- if (g_oom_killer_enabled)
- return;
-
- g_oom_killer_enabled = true;
-
-// === C malloc/calloc/valloc/realloc/posix_memalign ===
-
-// This approach is not perfect, as requests for amounts of memory larger than
-// MALLOC_ABSOLUTE_MAX_SIZE (currently SIZE_T_MAX - (2 * PAGE_SIZE)) will
-// still fail with a NULL rather than dying (see
-// http://opensource.apple.com/source/Libc/Libc-583/gen/malloc.c for details).
-// Unfortunately, it's the best we can do. Also note that this does not affect
-// allocations from non-default zones.
-
-#if !defined(ADDRESS_SANITIZER)
- // Don't do anything special on OOM for the malloc zones replaced by
- // AddressSanitizer, as modifying or protecting them may not work correctly.
- ChromeMallocZone* default_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_zone());
- if (!IsMallocZoneAlreadyStored(default_zone)) {
- StoreZoneFunctions(default_zone, &g_old_zone);
- MallocZoneFunctions new_functions = {};
- new_functions.malloc = oom_killer_malloc;
- new_functions.calloc = oom_killer_calloc;
- new_functions.valloc = oom_killer_valloc;
- new_functions.free = oom_killer_free;
- new_functions.realloc = oom_killer_realloc;
- new_functions.memalign = oom_killer_memalign;
-
- ReplaceZoneFunctions(default_zone, &new_functions);
- g_replaced_default_zone = true;
- }
-
- ChromeMallocZone* purgeable_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_purgeable_zone());
- if (purgeable_zone && !IsMallocZoneAlreadyStored(purgeable_zone)) {
- StoreZoneFunctions(purgeable_zone, &g_old_purgeable_zone);
- MallocZoneFunctions new_functions = {};
- new_functions.malloc = oom_killer_malloc_purgeable;
- new_functions.calloc = oom_killer_calloc_purgeable;
- new_functions.valloc = oom_killer_valloc_purgeable;
- new_functions.free = oom_killer_free_purgeable;
- new_functions.realloc = oom_killer_realloc_purgeable;
- new_functions.memalign = oom_killer_memalign_purgeable;
- ReplaceZoneFunctions(purgeable_zone, &new_functions);
- }
-#endif
-
- // === C malloc_zone_batch_malloc ===
-
- // batch_malloc is omitted because the default malloc zone's implementation
- // only supports batch_malloc for "tiny" allocations from the free list. It
- // will fail for allocations larger than "tiny", and will only allocate as
- // many blocks as it's able to from the free list. These factors mean that it
- // can return less than the requested memory even in a non-out-of-memory
- // situation. There's no good way to detect whether a batch_malloc failure is
- // due to these other factors, or due to genuine memory or address space
- // exhaustion. The fact that it only allocates space from the "tiny" free list
- // means that it's likely that a failure will not be due to memory exhaustion.
- // Similarly, these constraints on batch_malloc mean that callers must always
- // be expecting to receive less memory than was requested, even in situations
- // where memory pressure is not a concern. Finally, the only public interface
- // to batch_malloc is malloc_zone_batch_malloc, which is specific to the
- // system's malloc implementation. It's unlikely that anyone's even heard of
- // it.
-
-#ifndef ADDRESS_SANITIZER
- // === Core Foundation CFAllocators ===
-
- // This will not catch allocation done by custom allocators, but will catch
- // all allocation done by system-provided ones.
-
- CHECK(!g_old_cfallocator_system_default && !g_old_cfallocator_malloc &&
- !g_old_cfallocator_malloc_zone)
- << "Old allocators unexpectedly non-null";
-
- bool cf_allocator_internals_known = CanGetContextForCFAllocator();
-
- if (cf_allocator_internals_known) {
- CFAllocatorContext* context =
- ContextForCFAllocator(kCFAllocatorSystemDefault);
- CHECK(context) << "Failed to get context for kCFAllocatorSystemDefault.";
- g_old_cfallocator_system_default = context->allocate;
- CHECK(g_old_cfallocator_system_default)
- << "Failed to get kCFAllocatorSystemDefault allocation function.";
- context->allocate = oom_killer_cfallocator_system_default;
-
- context = ContextForCFAllocator(kCFAllocatorMalloc);
- CHECK(context) << "Failed to get context for kCFAllocatorMalloc.";
- g_old_cfallocator_malloc = context->allocate;
- CHECK(g_old_cfallocator_malloc)
- << "Failed to get kCFAllocatorMalloc allocation function.";
- context->allocate = oom_killer_cfallocator_malloc;
-
- context = ContextForCFAllocator(kCFAllocatorMallocZone);
- CHECK(context) << "Failed to get context for kCFAllocatorMallocZone.";
- g_old_cfallocator_malloc_zone = context->allocate;
- CHECK(g_old_cfallocator_malloc_zone)
- << "Failed to get kCFAllocatorMallocZone allocation function.";
- context->allocate = oom_killer_cfallocator_malloc_zone;
- } else {
- DLOG(WARNING) << "Internals of CFAllocator not known; out-of-memory "
- "failures via CFAllocator will not result in termination. "
- "http://crbug.com/45650";
- }
-#endif
-
- // === Cocoa NSObject allocation ===
-
- // Note that both +[NSObject new] and +[NSObject alloc] call through to
- // +[NSObject allocWithZone:].
-
- CHECK(!g_old_allocWithZone) << "Old allocator unexpectedly non-null";
-
- Class nsobject_class = [NSObject class];
- Method orig_method =
- class_getClassMethod(nsobject_class, @selector(allocWithZone:));
- g_old_allocWithZone =
- reinterpret_cast<allocWithZone_t>(method_getImplementation(orig_method));
- CHECK(g_old_allocWithZone)
- << "Failed to get allocWithZone allocation function.";
- method_setImplementation(orig_method,
- reinterpret_cast<IMP>(oom_killer_allocWithZone));
-}
-
-void UninterceptMallocZonesForTesting() {
- UninterceptMallocZoneForTesting(malloc_default_zone());
- vm_address_t* zones;
- unsigned int count;
- kern_return_t kr = malloc_get_all_zones(mach_task_self(), 0, &zones, &count);
- CHECK(kr == KERN_SUCCESS);
- for (unsigned int i = 0; i < count; ++i) {
- UninterceptMallocZoneForTesting(
- reinterpret_cast<struct _malloc_zone_t*>(zones[i]));
- }
-
- ClearAllMallocZonesForTesting();
-}
-
-namespace {
-
-void ShimNewMallocZonesAndReschedule(base::Time end_time,
- base::TimeDelta delay) {
- ShimNewMallocZones();
-
- if (base::Time::Now() > end_time)
- return;
-
- base::TimeDelta next_delay = delay * 2;
- SequencedTaskRunnerHandle::Get()->PostDelayedTask(
- FROM_HERE,
- base::Bind(&ShimNewMallocZonesAndReschedule, end_time, next_delay),
- delay);
-}
-
-} // namespace
-
-void PeriodicallyShimNewMallocZones() {
- base::Time end_time = base::Time::Now() + base::TimeDelta::FromMinutes(1);
- base::TimeDelta initial_delay = base::TimeDelta::FromSeconds(1);
- ShimNewMallocZonesAndReschedule(end_time, initial_delay);
-}
-
-void ShimNewMallocZones() {
- StoreFunctionsForAllZones();
-
-  // Use the functions for the default zone as a template to replace those in
-  // any new zones.
- ChromeMallocZone* default_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_zone());
- DCHECK(IsMallocZoneAlreadyStored(default_zone));
-
- MallocZoneFunctions new_functions;
- StoreZoneFunctions(default_zone, &new_functions);
- ReplaceFunctionsForStoredZones(&new_functions);
-}
-
-void ReplaceZoneFunctions(ChromeMallocZone* zone,
- const MallocZoneFunctions* functions) {
- // Remove protection.
- mach_vm_address_t reprotection_start = 0;
- mach_vm_size_t reprotection_length = 0;
- vm_prot_t reprotection_value = VM_PROT_NONE;
- DeprotectMallocZone(zone, &reprotection_start, &reprotection_length,
- &reprotection_value);
-
- CHECK(functions->malloc && functions->calloc && functions->valloc &&
- functions->free && functions->realloc);
- zone->malloc = functions->malloc;
- zone->calloc = functions->calloc;
- zone->valloc = functions->valloc;
- zone->free = functions->free;
- zone->realloc = functions->realloc;
- if (functions->batch_malloc)
- zone->batch_malloc = functions->batch_malloc;
- if (functions->batch_free)
- zone->batch_free = functions->batch_free;
- if (functions->size)
- zone->size = functions->size;
- if (zone->version >= 5 && functions->memalign) {
- zone->memalign = functions->memalign;
- }
- if (zone->version >= 6 && functions->free_definite_size) {
- zone->free_definite_size = functions->free_definite_size;
- }
-
- // Restore protection if it was active.
- if (reprotection_start) {
- kern_return_t result =
- mach_vm_protect(mach_task_self(), reprotection_start,
- reprotection_length, false, reprotection_value);
- MACH_CHECK(result == KERN_SUCCESS, result) << "mach_vm_protect";
- }
-}
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_interception_mac_unittest.mm b/base/allocator/allocator_interception_mac_unittest.mm
deleted file mode 100644
index c919ca0..0000000
--- a/base/allocator/allocator_interception_mac_unittest.mm
+++ /dev/null
@@ -1,64 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include <mach/mach.h>
-
-#include "base/allocator/allocator_interception_mac.h"
-#include "base/allocator/allocator_shim.h"
-#include "base/allocator/malloc_zone_functions_mac.h"
-#include "testing/gtest/include/gtest/gtest.h"
-
-namespace base {
-namespace allocator {
-
-namespace {
-void ResetMallocZone(ChromeMallocZone* zone) {
- MallocZoneFunctions& functions = GetFunctionsForZone(zone);
- ReplaceZoneFunctions(zone, &functions);
-}
-
-void ResetAllMallocZones() {
- ChromeMallocZone* default_malloc_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_zone());
- ResetMallocZone(default_malloc_zone);
-
- vm_address_t* zones;
- unsigned int count;
- kern_return_t kr = malloc_get_all_zones(mach_task_self(), 0, &zones, &count);
- if (kr != KERN_SUCCESS)
- return;
- for (unsigned int i = 0; i < count; ++i) {
- ChromeMallocZone* zone = reinterpret_cast<ChromeMallocZone*>(zones[i]);
- ResetMallocZone(zone);
- }
-}
-} // namespace
-
-class AllocatorInterceptionTest : public testing::Test {
- protected:
- void TearDown() override {
- ResetAllMallocZones();
- ClearAllMallocZonesForTesting();
- }
-};
-
-#if !defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-TEST_F(AllocatorInterceptionTest, ShimNewMallocZones) {
- InitializeAllocatorShim();
- ChromeMallocZone* default_malloc_zone =
- reinterpret_cast<ChromeMallocZone*>(malloc_default_zone());
-
- malloc_zone_t new_zone;
- memset(&new_zone, 1, sizeof(malloc_zone_t));
- malloc_zone_register(&new_zone);
- EXPECT_NE(new_zone.malloc, default_malloc_zone->malloc);
- ShimNewMallocZones();
- EXPECT_EQ(new_zone.malloc, default_malloc_zone->malloc);
-
- malloc_zone_unregister(&new_zone);
-}
-#endif
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_shim.cc b/base/allocator/allocator_shim.cc
deleted file mode 100644
index d0205ca..0000000
--- a/base/allocator/allocator_shim.cc
+++ /dev/null
@@ -1,336 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_shim.h"
-
-#include <errno.h>
-
-#include <new>
-
-#include "base/atomicops.h"
-#include "base/logging.h"
-#include "base/macros.h"
-#include "base/process/process_metrics.h"
-#include "base/threading/platform_thread.h"
-#include "build_config.h"
-
-#if !defined(OS_WIN)
-#include <unistd.h>
-#else
-#include "base/allocator/winheap_stubs_win.h"
-#endif
-
-#if defined(OS_MACOSX)
-#include <malloc/malloc.h>
-
-#include "base/allocator/allocator_interception_mac.h"
-#endif
-
-// No calls to malloc / new in this file. They would cause re-entrancy of
-// the shim, which is hard to deal with. Keep this code as simple as possible
-// and don't use any external C++ object here, not even //base ones. Even if
-// they are safe to use today, in future they might be refactored.
-
-namespace {
-
-using namespace base;
-
-subtle::AtomicWord g_chain_head = reinterpret_cast<subtle::AtomicWord>(
- &allocator::AllocatorDispatch::default_dispatch);
-
-bool g_call_new_handler_on_malloc_failure = false;
-
-inline size_t GetCachedPageSize() {
- static size_t pagesize = 0;
- if (!pagesize)
- pagesize = base::GetPageSize();
- return pagesize;
-}
-
-// Calls the std::new handler thread-safely. Returns true if a new_handler was
-// set and called, false if no new_handler was set.
-bool CallNewHandler(size_t size) {
-#if defined(OS_WIN)
- return base::allocator::WinCallNewHandler(size);
-#else
- std::new_handler nh = std::get_new_handler();
- if (!nh)
- return false;
- (*nh)();
-  // Assume the new_handler will abort if it fails. Exceptions are disabled
-  // and we don't support the case of a new_handler throwing std::bad_alloc.
- return true;
-#endif
-}
-
-inline const allocator::AllocatorDispatch* GetChainHead() {
- // TODO(primiano): Just use NoBarrier_Load once crbug.com/593344 is fixed.
- // Unfortunately due to that bug NoBarrier_Load() is mistakenly fully
-  // barriered on Linux+Clang, and that causes visible perf regressions.
- return reinterpret_cast<const allocator::AllocatorDispatch*>(
-#if defined(OS_LINUX) && defined(__clang__)
- *static_cast<const volatile subtle::AtomicWord*>(&g_chain_head)
-#else
- subtle::NoBarrier_Load(&g_chain_head)
-#endif
- );
-}
-
-} // namespace
-
-namespace base {
-namespace allocator {
-
-void SetCallNewHandlerOnMallocFailure(bool value) {
- g_call_new_handler_on_malloc_failure = value;
-}
-
-void* UncheckedAlloc(size_t size) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->alloc_function(chain_head, size, nullptr);
-}
-
-void InsertAllocatorDispatch(AllocatorDispatch* dispatch) {
- // Loop in case of (an unlikely) race on setting the list head.
-  const size_t kMaxRetries = 7;
- for (size_t i = 0; i < kMaxRetries; ++i) {
- const AllocatorDispatch* chain_head = GetChainHead();
- dispatch->next = chain_head;
-
- // This function guarantees to be thread-safe w.r.t. concurrent
- // insertions. It also has to guarantee that all the threads always
- // see a consistent chain, hence the MemoryBarrier() below.
- // InsertAllocatorDispatch() is NOT a fastpath, as opposite to malloc(), so
- // we don't really want this to be a release-store with a corresponding
- // acquire-load during malloc().
- subtle::MemoryBarrier();
- subtle::AtomicWord old_value =
- reinterpret_cast<subtle::AtomicWord>(chain_head);
- // Set the chain head to the new dispatch atomically. If we lose the race,
- // the comparison will fail, and the new head of chain will be returned.
- if (subtle::NoBarrier_CompareAndSwap(
- &g_chain_head, old_value,
- reinterpret_cast<subtle::AtomicWord>(dispatch)) == old_value) {
- // Success.
- return;
- }
- }
-
- CHECK(false); // Too many retries, this shouldn't happen.
-}
-
-void RemoveAllocatorDispatchForTesting(AllocatorDispatch* dispatch) {
- DCHECK_EQ(GetChainHead(), dispatch);
- subtle::NoBarrier_Store(&g_chain_head,
- reinterpret_cast<subtle::AtomicWord>(dispatch->next));
-}
-
-} // namespace allocator
-} // namespace base
-
-// The Shim* functions below are the entry-points into the shim-layer and
-// are supposed to be invoked by the allocator_shim_override_*
-// headers to route the malloc / new symbols through the shim layer.
-// They are defined as ALWAYS_INLINE in order to remove a level of indirection
-// between the system-defined entry points and the shim implementations.
-extern "C" {
-
-// The general pattern for allocations is:
-// - Try to allocate; if it succeeds, return the pointer.
-// - If the allocation failed:
-// - Call the std::new_handler if it was a C++ allocation.
-// - Call the std::new_handler if it was a malloc() (or calloc() or similar)
-// AND SetCallNewHandlerOnMallocFailure(true).
-// - If the std::new_handler is NOT set just return nullptr.
-// - If the std::new_handler is set:
-//    - Assume it will abort() if it fails (very likely the new_handler will
-//      just abort, printing a message).
-// - Assume it did succeed if it returns, in which case reattempt the alloc.
-
-ALWAYS_INLINE void* ShimCppNew(size_t size) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- void* ptr;
- do {
- void* context = nullptr;
-#if defined(OS_MACOSX)
- context = malloc_default_zone();
-#endif
- ptr = chain_head->alloc_function(chain_head, size, context);
- } while (!ptr && CallNewHandler(size));
- return ptr;
-}
-
-ALWAYS_INLINE void ShimCppDelete(void* address) {
- void* context = nullptr;
-#if defined(OS_MACOSX)
- context = malloc_default_zone();
-#endif
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->free_function(chain_head, address, context);
-}
-
-ALWAYS_INLINE void* ShimMalloc(size_t size, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- void* ptr;
- do {
- ptr = chain_head->alloc_function(chain_head, size, context);
- } while (!ptr && g_call_new_handler_on_malloc_failure &&
- CallNewHandler(size));
- return ptr;
-}
-
-ALWAYS_INLINE void* ShimCalloc(size_t n, size_t size, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- void* ptr;
- do {
- ptr = chain_head->alloc_zero_initialized_function(chain_head, n, size,
- context);
- } while (!ptr && g_call_new_handler_on_malloc_failure &&
- CallNewHandler(size));
- return ptr;
-}
-
-ALWAYS_INLINE void* ShimRealloc(void* address, size_t size, void* context) {
- // realloc(size == 0) means free() and might return a nullptr. We should
- // not call the std::new_handler in that case, though.
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- void* ptr;
- do {
- ptr = chain_head->realloc_function(chain_head, address, size, context);
- } while (!ptr && size && g_call_new_handler_on_malloc_failure &&
- CallNewHandler(size));
- return ptr;
-}
-
-ALWAYS_INLINE void* ShimMemalign(size_t alignment, size_t size, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- void* ptr;
- do {
- ptr = chain_head->alloc_aligned_function(chain_head, alignment, size,
- context);
- } while (!ptr && g_call_new_handler_on_malloc_failure &&
- CallNewHandler(size));
- return ptr;
-}
-
-ALWAYS_INLINE int ShimPosixMemalign(void** res, size_t alignment, size_t size) {
- // posix_memalign is supposed to check the arguments. See tc_posix_memalign()
- // in tc_malloc.cc.
- if (((alignment % sizeof(void*)) != 0) ||
- ((alignment & (alignment - 1)) != 0) || (alignment == 0)) {
- return EINVAL;
- }
- void* ptr = ShimMemalign(alignment, size, nullptr);
- *res = ptr;
- return ptr ? 0 : ENOMEM;
-}
-
-ALWAYS_INLINE void* ShimValloc(size_t size, void* context) {
- return ShimMemalign(GetCachedPageSize(), size, context);
-}
-
-ALWAYS_INLINE void* ShimPvalloc(size_t size) {
- // pvalloc(0) should allocate one page, according to its man page.
- if (size == 0) {
- size = GetCachedPageSize();
- } else {
- size = (size + GetCachedPageSize() - 1) & ~(GetCachedPageSize() - 1);
- }
- // The third argument is nullptr because pvalloc is glibc only and does not
- // exist on OSX/BSD systems.
- return ShimMemalign(GetCachedPageSize(), size, nullptr);
-}
-
-ALWAYS_INLINE void ShimFree(void* address, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->free_function(chain_head, address, context);
-}
-
-ALWAYS_INLINE size_t ShimGetSizeEstimate(const void* address, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->get_size_estimate_function(
- chain_head, const_cast<void*>(address), context);
-}
-
-ALWAYS_INLINE unsigned ShimBatchMalloc(size_t size,
- void** results,
- unsigned num_requested,
- void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->batch_malloc_function(chain_head, size, results,
- num_requested, context);
-}
-
-ALWAYS_INLINE void ShimBatchFree(void** to_be_freed,
- unsigned num_to_be_freed,
- void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->batch_free_function(chain_head, to_be_freed,
- num_to_be_freed, context);
-}
-
-ALWAYS_INLINE void ShimFreeDefiniteSize(void* ptr, size_t size, void* context) {
- const allocator::AllocatorDispatch* const chain_head = GetChainHead();
- return chain_head->free_definite_size_function(chain_head, ptr, size,
- context);
-}
-
-} // extern "C"
-
-#if !defined(OS_WIN) && !defined(OS_MACOSX)
-// Cpp symbols (new / delete) should always be routed through the shim layer
-// except on Windows and macOS where the malloc intercept is deep enough that it
-// also catches the cpp calls.
-#include "base/allocator/allocator_shim_override_cpp_symbols.h"
-#endif
-
-#if defined(OS_ANDROID)
-// Android does not support symbol interposition. The way malloc symbols are
-// intercepted on Android is by using link-time -wrap flags.
-#include "base/allocator/allocator_shim_override_linker_wrapped_symbols.h"
-#elif defined(OS_WIN)
-// On Windows we use plain link-time overriding of the CRT symbols.
-#include "base/allocator/allocator_shim_override_ucrt_symbols_win.h"
-#elif defined(OS_MACOSX)
-#include "base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.h"
-#include "base/allocator/allocator_shim_override_mac_symbols.h"
-#else
-#include "base/allocator/allocator_shim_override_libc_symbols.h"
-#endif
-
-// In the case of tcmalloc we also want to plumb into the glibc hooks
-// to avoid that allocations made in glibc itself (e.g., strdup()) get
-// accidentally performed on the glibc heap instead of the tcmalloc one.
-#if defined(USE_TCMALLOC)
-#include "base/allocator/allocator_shim_override_glibc_weak_symbols.h"
-#endif
-
-#if defined(OS_MACOSX)
-namespace base {
-namespace allocator {
-void InitializeAllocatorShim() {
- // Prepares the default dispatch. After the intercepted malloc calls have
- // traversed the shim this will route them to the default malloc zone.
- InitializeDefaultDispatchToMacAllocator();
-
- MallocZoneFunctions functions = MallocZoneFunctionsToReplaceDefault();
-
- // This replaces the default malloc zone, causing calls to malloc & friends
- // from the codebase to be routed to ShimMalloc() above.
- base::allocator::ReplaceFunctionsForStoredZones(&functions);
-}
-} // namespace allocator
-} // namespace base
-#endif
-
-// Cross-checks.
-
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-#error The allocator shim should not be compiled when building for memory tools.
-#endif
-
-#if (defined(__GNUC__) && defined(__EXCEPTIONS)) || \
- (defined(_MSC_VER) && defined(_CPPUNWIND))
-#error This code cannot be used when exceptions are turned on.
-#endif
diff --git a/base/allocator/allocator_shim.h b/base/allocator/allocator_shim.h
deleted file mode 100644
index 6256f30..0000000
--- a/base/allocator/allocator_shim.h
+++ /dev/null
@@ -1,133 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_SHIM_H_
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_H_
-
-#include <stddef.h>
-
-#include "base/base_export.h"
-#include "build_config.h"
-
-namespace base {
-namespace allocator {
-
-// Allocator Shim API. Allows to:
-// - Configure the behavior of the allocator (what to do on OOM failures).
-// - Install new hooks (AllocatorDispatch) in the allocator chain.
-
-// When this shim layer is enabled, the route of an allocation is as follows:
-//
-// [allocator_shim_override_*.h] Intercept malloc() / operator new calls:
-// The override_* headers define the symbols required to intercept calls to
-// malloc() and operator new (if not overridden by specific C++ classes).
-//
-// [allocator_shim.cc] Routing allocation calls to the shim:
-// The headers above route the calls to the internal ShimMalloc(), ShimFree(),
-// ShimCppNew() etc. methods defined in allocator_shim.cc.
-// These methods will: (1) forward the allocation call to the front of the
-// AllocatorDispatch chain. (2) perform security hardenings (e.g., might
-// call std::new_handler on OOM failure).
-//
-// [allocator_shim_default_dispatch_to_*.cc] The AllocatorDispatch chain:
-// It is a singly linked list where each element is a struct with function
-// pointers (|malloc_function|, |free_function|, etc). Normally the chain
-// consists of a single AllocatorDispatch element, herein called
-// the "default dispatch", which is statically defined at build time and
-// ultimately routes the calls to the actual allocator defined by the build
-// config (tcmalloc, glibc, ...).
-//
-// It is possible to dynamically insert further AllocatorDispatch stages
-// to the front of the chain, for debugging / profiling purposes.
-//
-// All the functions must be thread-safe. The shim does not enforce any
-// serialization. This is to route to thread-aware allocators (e.g., tcmalloc)
-// without introducing unnecessary perf hits.
-
-struct AllocatorDispatch {
- using AllocFn = void*(const AllocatorDispatch* self,
- size_t size,
- void* context);
- using AllocZeroInitializedFn = void*(const AllocatorDispatch* self,
- size_t n,
- size_t size,
- void* context);
- using AllocAlignedFn = void*(const AllocatorDispatch* self,
- size_t alignment,
- size_t size,
- void* context);
- using ReallocFn = void*(const AllocatorDispatch* self,
- void* address,
- size_t size,
- void* context);
- using FreeFn = void(const AllocatorDispatch* self,
- void* address,
- void* context);
- // Returns the best available estimate for the actual amount of memory
- // consumed by the allocation |address|. If possible, this should include
- // heap overhead or at least a decent estimate of the full cost of the
- // allocation. If no good estimate is possible, returns zero.
- using GetSizeEstimateFn = size_t(const AllocatorDispatch* self,
- void* address,
- void* context);
- using BatchMallocFn = unsigned(const AllocatorDispatch* self,
- size_t size,
- void** results,
- unsigned num_requested,
- void* context);
- using BatchFreeFn = void(const AllocatorDispatch* self,
- void** to_be_freed,
- unsigned num_to_be_freed,
- void* context);
- using FreeDefiniteSizeFn = void(const AllocatorDispatch* self,
- void* ptr,
- size_t size,
- void* context);
-
- AllocFn* const alloc_function;
- AllocZeroInitializedFn* const alloc_zero_initialized_function;
- AllocAlignedFn* const alloc_aligned_function;
- ReallocFn* const realloc_function;
- FreeFn* const free_function;
- GetSizeEstimateFn* const get_size_estimate_function;
- BatchMallocFn* const batch_malloc_function;
- BatchFreeFn* const batch_free_function;
- FreeDefiniteSizeFn* const free_definite_size_function;
-
- const AllocatorDispatch* next;
-
- // |default_dispatch| is statically defined by one (and only one) of the
- // allocator_shim_default_dispatch_to_*.cc files, depending on the build
- // configuration.
- static const AllocatorDispatch default_dispatch;
-};
-
-// When true, makes malloc behave like operator new w.r.t. calling the
-// new_handler if the allocation fails (see _set_new_mode() on Windows).
-BASE_EXPORT void SetCallNewHandlerOnMallocFailure(bool value);
-
-// Allocates |size| bytes or returns nullptr. It does NOT call the new_handler,
-// regardless of SetCallNewHandlerOnMallocFailure().
-BASE_EXPORT void* UncheckedAlloc(size_t size);
-
-// Inserts |dispatch| in front of the allocator chain. This method is
-// thread-safe w.r.t concurrent invocations of InsertAllocatorDispatch().
-// Callers are responsible for not inserting a given dispatch more than once.
-BASE_EXPORT void InsertAllocatorDispatch(AllocatorDispatch* dispatch);
-
-// Test-only. Rationale: (1) lack of use cases; (2) dealing safely with a
-// removal of arbitrary elements from a singly linked list would require a lock
-// in malloc(), which we really don't want.
-BASE_EXPORT void RemoveAllocatorDispatchForTesting(AllocatorDispatch* dispatch);
-
-#if defined(OS_MACOSX)
-// On macOS, the allocator shim needs to be turned on during runtime.
-BASE_EXPORT void InitializeAllocatorShim();
-#endif // defined(OS_MACOSX)
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_ALLOCATOR_SHIM_H_
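To make the dispatch-chain mechanics concrete, here is a minimal sketch of a profiling hook built on the API declared in the header above. The names `g_alloc_count`, `CountingAlloc`, and `InstallCountingHook` are illustrative, not part of the tree; every entry of the nine-slot initializer that the shim exercises is filled in, because the shim calls the chain head without null checks, and the comment about `next` reflects how `InsertAllocatorDispatch()` links new stages in.

```cpp
// Sketch: a counting hook at the front of the dispatch chain. Hypothetical
// names; assumes the base::allocator API declared in allocator_shim.h above.
#include <atomic>
#include <cstddef>

#include "base/allocator/allocator_shim.h"

namespace {

using base::allocator::AllocatorDispatch;

std::atomic<size_t> g_alloc_count{0};

void* CountingAlloc(const AllocatorDispatch* self, size_t size, void* ctx) {
  g_alloc_count.fetch_add(1, std::memory_order_relaxed);
  // Forward to the next stage in the chain (ultimately default_dispatch).
  return self->next->alloc_function(self->next, size, ctx);
}

AllocatorDispatch g_counting_dispatch = {
    &CountingAlloc, /* alloc_function */
    [](const AllocatorDispatch* s, size_t n, size_t size, void* c) {
      return s->next->alloc_zero_initialized_function(s->next, n, size, c);
    },
    [](const AllocatorDispatch* s, size_t align, size_t size, void* c) {
      return s->next->alloc_aligned_function(s->next, align, size, c);
    },
    [](const AllocatorDispatch* s, void* p, size_t size, void* c) {
      return s->next->realloc_function(s->next, p, size, c);
    },
    [](const AllocatorDispatch* s, void* p, void* c) {
      s->next->free_function(s->next, p, c);
    },
    [](const AllocatorDispatch* s, void* p, void* c) -> size_t {
      return s->next->get_size_estimate_function(s->next, p, c);
    },
    nullptr,  // batch_malloc_function: null as in the non-Mac defaults below.
    nullptr,  // batch_free_function
    nullptr,  // free_definite_size_function
    nullptr,  // next: linked in by InsertAllocatorDispatch().
};

}  // namespace

void InstallCountingHook() {
  base::allocator::InsertAllocatorDispatch(&g_counting_dispatch);
}
```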
diff --git a/base/allocator/allocator_shim_default_dispatch_to_glibc.cc b/base/allocator/allocator_shim_default_dispatch_to_glibc.cc
deleted file mode 100644
index 8574da3..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_glibc.cc
+++ /dev/null
@@ -1,75 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_shim.h"
-
-#include <malloc.h>
-
-// This translation unit defines a default dispatch for the allocator shim which
-// routes allocations to libc functions.
-// The code here is strongly inspired by tcmalloc's libc_override_glibc.h.
-
-extern "C" {
-void* __libc_malloc(size_t size);
-void* __libc_calloc(size_t n, size_t size);
-void* __libc_realloc(void* address, size_t size);
-void* __libc_memalign(size_t alignment, size_t size);
-void __libc_free(void* ptr);
-} // extern "C"
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-void* GlibcMalloc(const AllocatorDispatch*, size_t size, void* context) {
- return __libc_malloc(size);
-}
-
-void* GlibcCalloc(const AllocatorDispatch*,
- size_t n,
- size_t size,
- void* context) {
- return __libc_calloc(n, size);
-}
-
-void* GlibcRealloc(const AllocatorDispatch*,
- void* address,
- size_t size,
- void* context) {
- return __libc_realloc(address, size);
-}
-
-void* GlibcMemalign(const AllocatorDispatch*,
- size_t alignment,
- size_t size,
- void* context) {
- return __libc_memalign(alignment, size);
-}
-
-void GlibcFree(const AllocatorDispatch*, void* address, void* context) {
- __libc_free(address);
-}
-
-size_t GlibcGetSizeEstimate(const AllocatorDispatch*,
- void* address,
- void* context) {
- // TODO(siggi, primiano): malloc_usable_size may need redirection in the
- // presence of interposing shims that divert allocations.
- return malloc_usable_size(address);
-}
-
-} // namespace
-
-const AllocatorDispatch AllocatorDispatch::default_dispatch = {
- &GlibcMalloc, /* alloc_function */
- &GlibcCalloc, /* alloc_zero_initialized_function */
- &GlibcMemalign, /* alloc_aligned_function */
- &GlibcRealloc, /* realloc_function */
- &GlibcFree, /* free_function */
- &GlibcGetSizeEstimate, /* get_size_estimate_function */
- nullptr, /* batch_malloc_function */
- nullptr, /* batch_free_function */
- nullptr, /* free_definite_size_function */
- nullptr, /* next */
-};
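A point worth spelling out about the routing above: the `__libc_*` entry points are glibc-internal aliases that remain reachable even when the public `malloc` symbol is interposed, which is what lets this default dispatch reach the real allocator. A standalone sketch of that property (Linux/glibc only, independent of the shim, with hypothetical logging):

```cpp
// Standalone sketch: glibc's internal aliases stay reachable even when the
// public malloc symbol is overridden. Linux/glibc only.
#include <cstddef>
#include <cstdio>

extern "C" {
void* __libc_malloc(size_t size);
void __libc_free(void* ptr);
}

int main() {
  // Goes straight to glibc's allocator, bypassing any malloc() override.
  void* p = __libc_malloc(64);
  std::printf("__libc_malloc(64) -> %p\n", p);
  __libc_free(p);
  return 0;
}
```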
diff --git a/base/allocator/allocator_shim_default_dispatch_to_linker_wrapped_symbols.cc b/base/allocator/allocator_shim_default_dispatch_to_linker_wrapped_symbols.cc
deleted file mode 100644
index 89cabc4..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_linker_wrapped_symbols.cc
+++ /dev/null
@@ -1,113 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include <malloc.h>
-
-#include "base/allocator/allocator_shim.h"
-#include "build_config.h"
-
-#if defined(OS_ANDROID) && __ANDROID_API__ < 17
-#include <dlfcn.h>
-// This is declared in malloc.h on other platforms. We just need the
-// declaration for the decltype(malloc_usable_size)* expression below to work.
-size_t malloc_usable_size(const void*);
-#endif
-
-// This translation unit defines a default dispatch for the allocator shim which
-// routes allocations to the original libc functions when using the link-time
-// -Wl,-wrap,malloc approach (see README.md).
-// The __real_X functions here are special symbols that the linker will relocate
-// against the real "X" undefined symbol, so that __real_malloc becomes the
-// equivalent of what an undefined malloc symbol reference would have been.
-// This is the counterpart of allocator_shim_override_linker_wrapped_symbols.h,
-// which routes the __wrap_X functions into the shim.
-
-extern "C" {
-void* __real_malloc(size_t);
-void* __real_calloc(size_t, size_t);
-void* __real_realloc(void*, size_t);
-void* __real_memalign(size_t, size_t);
-void __real_free(void*);
-} // extern "C"
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-void* RealMalloc(const AllocatorDispatch*, size_t size, void* context) {
- return __real_malloc(size);
-}
-
-void* RealCalloc(const AllocatorDispatch*,
- size_t n,
- size_t size,
- void* context) {
- return __real_calloc(n, size);
-}
-
-void* RealRealloc(const AllocatorDispatch*,
- void* address,
- size_t size,
- void* context) {
- return __real_realloc(address, size);
-}
-
-void* RealMemalign(const AllocatorDispatch*,
- size_t alignment,
- size_t size,
- void* context) {
- return __real_memalign(alignment, size);
-}
-
-void RealFree(const AllocatorDispatch*, void* address, void* context) {
- __real_free(address);
-}
-
-#if defined(OS_ANDROID) && __ANDROID_API__ < 17
-size_t DummyMallocUsableSize(const void*) { return 0; }
-#endif
-
-size_t RealSizeEstimate(const AllocatorDispatch*,
- void* address,
- void* context) {
-#if defined(OS_ANDROID)
-#if __ANDROID_API__ < 17
- // malloc_usable_size() is available only starting from API 17.
- // TODO(dskiba): remove once we start building against 17+.
- using MallocUsableSizeFunction = decltype(malloc_usable_size)*;
- static MallocUsableSizeFunction usable_size_function = nullptr;
- if (!usable_size_function) {
- void* function_ptr = dlsym(RTLD_DEFAULT, "malloc_usable_size");
- if (function_ptr) {
- usable_size_function = reinterpret_cast<MallocUsableSizeFunction>(
- function_ptr);
- } else {
- usable_size_function = &DummyMallocUsableSize;
- }
- }
- return usable_size_function(address);
-#else
- return malloc_usable_size(address);
-#endif
-#endif // OS_ANDROID
-
- // TODO(primiano): This should be redirected to malloc_usable_size or
- // the like.
- return 0;
-}
-
-} // namespace
-
-const AllocatorDispatch AllocatorDispatch::default_dispatch = {
- &RealMalloc, /* alloc_function */
- &RealCalloc, /* alloc_zero_initialized_function */
- &RealMemalign, /* alloc_aligned_function */
- &RealRealloc, /* realloc_function */
- &RealFree, /* free_function */
- &RealSizeEstimate, /* get_size_estimate_function */
- nullptr, /* batch_malloc_function */
- nullptr, /* batch_free_function */
- nullptr, /* free_definite_size_function */
- nullptr, /* next */
-};
diff --git a/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.cc b/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.cc
deleted file mode 100644
index 32898ef..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.cc
+++ /dev/null
@@ -1,109 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.h"
-
-#include <utility>
-
-#include "base/allocator/allocator_interception_mac.h"
-#include "base/allocator/allocator_shim.h"
-#include "base/allocator/malloc_zone_functions_mac.h"
-
-namespace base {
-namespace allocator {
-namespace {
-
-void* MallocImpl(const AllocatorDispatch*, size_t size, void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.malloc(reinterpret_cast<struct _malloc_zone_t*>(context),
- size);
-}
-
-void* CallocImpl(const AllocatorDispatch*,
- size_t n,
- size_t size,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.calloc(reinterpret_cast<struct _malloc_zone_t*>(context), n,
- size);
-}
-
-void* MemalignImpl(const AllocatorDispatch*,
- size_t alignment,
- size_t size,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.memalign(reinterpret_cast<struct _malloc_zone_t*>(context),
- alignment, size);
-}
-
-void* ReallocImpl(const AllocatorDispatch*,
- void* ptr,
- size_t size,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.realloc(reinterpret_cast<struct _malloc_zone_t*>(context),
- ptr, size);
-}
-
-void FreeImpl(const AllocatorDispatch*, void* ptr, void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- functions.free(reinterpret_cast<struct _malloc_zone_t*>(context), ptr);
-}
-
-size_t GetSizeEstimateImpl(const AllocatorDispatch*, void* ptr, void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.size(reinterpret_cast<struct _malloc_zone_t*>(context), ptr);
-}
-
-unsigned BatchMallocImpl(const AllocatorDispatch* self,
- size_t size,
- void** results,
- unsigned num_requested,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- return functions.batch_malloc(
- reinterpret_cast<struct _malloc_zone_t*>(context), size, results,
- num_requested);
-}
-
-void BatchFreeImpl(const AllocatorDispatch* self,
- void** to_be_freed,
- unsigned num_to_be_freed,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- functions.batch_free(reinterpret_cast<struct _malloc_zone_t*>(context),
- to_be_freed, num_to_be_freed);
-}
-
-void FreeDefiniteSizeImpl(const AllocatorDispatch* self,
- void* ptr,
- size_t size,
- void* context) {
- MallocZoneFunctions& functions = GetFunctionsForZone(context);
- functions.free_definite_size(
- reinterpret_cast<struct _malloc_zone_t*>(context), ptr, size);
-}
-
-} // namespace
-
-void InitializeDefaultDispatchToMacAllocator() {
- StoreFunctionsForAllZones();
-}
-
-const AllocatorDispatch AllocatorDispatch::default_dispatch = {
- &MallocImpl, /* alloc_function */
- &CallocImpl, /* alloc_zero_initialized_function */
- &MemalignImpl, /* alloc_aligned_function */
- &ReallocImpl, /* realloc_function */
- &FreeImpl, /* free_function */
- &GetSizeEstimateImpl, /* get_size_estimate_function */
- &BatchMallocImpl, /* batch_malloc_function */
- &BatchFreeImpl, /* batch_free_function */
- &FreeDefiniteSizeImpl, /* free_definite_size_function */
- nullptr, /* next */
-};
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.h b/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.h
deleted file mode 100644
index 77d533c..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_mac_zoned_malloc.h
+++ /dev/null
@@ -1,19 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_SHIM_DEFAULT_DISPATCH_TO_ZONED_MALLOC_H_
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_DEFAULT_DISPATCH_TO_ZONED_MALLOC_H_
-
-namespace base {
-namespace allocator {
-
-// This initializes AllocatorDispatch::default_dispatch by saving pointers to
-// the functions in the current default malloc zone. This must be called before
-// the default malloc zone is changed to have its intended effect.
-void InitializeDefaultDispatchToMacAllocator();
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_ALLOCATOR_SHIM_DEFAULT_DISPATCH_TO_ZONED_MALLOC_H_
diff --git a/base/allocator/allocator_shim_default_dispatch_to_tcmalloc.cc b/base/allocator/allocator_shim_default_dispatch_to_tcmalloc.cc
deleted file mode 100644
index 878e8a7..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_tcmalloc.cc
+++ /dev/null
@@ -1,92 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_shim.h"
-#include "base/allocator/allocator_shim_internals.h"
-#include "third_party/tcmalloc/chromium/src/config.h"
-#include "third_party/tcmalloc/chromium/src/gperftools/tcmalloc.h"
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-void* TCMalloc(const AllocatorDispatch*, size_t size, void* context) {
- return tc_malloc(size);
-}
-
-void* TCCalloc(const AllocatorDispatch*, size_t n, size_t size, void* context) {
- return tc_calloc(n, size);
-}
-
-void* TCMemalign(const AllocatorDispatch*,
- size_t alignment,
- size_t size,
- void* context) {
- return tc_memalign(alignment, size);
-}
-
-void* TCRealloc(const AllocatorDispatch*,
- void* address,
- size_t size,
- void* context) {
- return tc_realloc(address, size);
-}
-
-void TCFree(const AllocatorDispatch*, void* address, void* context) {
- tc_free(address);
-}
-
-size_t TCGetSizeEstimate(const AllocatorDispatch*,
- void* address,
- void* context) {
- return tc_malloc_size(address);
-}
-
-} // namespace
-
-const AllocatorDispatch AllocatorDispatch::default_dispatch = {
- &TCMalloc, /* alloc_function */
- &TCCalloc, /* alloc_zero_initialized_function */
- &TCMemalign, /* alloc_aligned_function */
- &TCRealloc, /* realloc_function */
- &TCFree, /* free_function */
- &TCGetSizeEstimate, /* get_size_estimate_function */
- nullptr, /* batch_malloc_function */
- nullptr, /* batch_free_function */
- nullptr, /* free_definite_size_function */
- nullptr, /* next */
-};
-
-// In the case of tcmalloc we also have to route the diagnostic symbols, which
-// are not part of the unified shim layer, to tcmalloc for consistency.
-
-extern "C" {
-
-SHIM_ALWAYS_EXPORT void malloc_stats(void) __THROW {
- return tc_malloc_stats();
-}
-
-SHIM_ALWAYS_EXPORT int mallopt(int cmd, int value) __THROW {
- return tc_mallopt(cmd, value);
-}
-
-#ifdef HAVE_STRUCT_MALLINFO
-SHIM_ALWAYS_EXPORT struct mallinfo mallinfo(void) __THROW {
- return tc_mallinfo();
-}
-#endif
-
-SHIM_ALWAYS_EXPORT size_t malloc_size(void* address) __THROW {
- return tc_malloc_size(address);
-}
-
-#if defined(__ANDROID__)
-SHIM_ALWAYS_EXPORT size_t malloc_usable_size(const void* address) __THROW {
-#else
-SHIM_ALWAYS_EXPORT size_t malloc_usable_size(void* address) __THROW {
-#endif
- return tc_malloc_size(address);
-}
-
-} // extern "C"
diff --git a/base/allocator/allocator_shim_default_dispatch_to_winheap.cc b/base/allocator/allocator_shim_default_dispatch_to_winheap.cc
deleted file mode 100644
index 6aba5a3..0000000
--- a/base/allocator/allocator_shim_default_dispatch_to_winheap.cc
+++ /dev/null
@@ -1,79 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/allocator_shim.h"
-
-#include "base/allocator/winheap_stubs_win.h"
-#include "base/logging.h"
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-void* DefaultWinHeapMallocImpl(const AllocatorDispatch*,
- size_t size,
- void* context) {
- return base::allocator::WinHeapMalloc(size);
-}
-
-void* DefaultWinHeapCallocImpl(const AllocatorDispatch* self,
- size_t n,
- size_t elem_size,
- void* context) {
- // Overflow check.
- const size_t size = n * elem_size;
- if (elem_size != 0 && size / elem_size != n)
- return nullptr;
-
- void* result = DefaultWinHeapMallocImpl(self, size, context);
- if (result) {
- memset(result, 0, size);
- }
- return result;
-}
-
-void* DefaultWinHeapMemalignImpl(const AllocatorDispatch* self,
- size_t alignment,
- size_t size,
- void* context) {
- CHECK(false) << "The windows heap does not support memalign.";
- return nullptr;
-}
-
-void* DefaultWinHeapReallocImpl(const AllocatorDispatch* self,
- void* address,
- size_t size,
- void* context) {
- return base::allocator::WinHeapRealloc(address, size);
-}
-
-void DefaultWinHeapFreeImpl(const AllocatorDispatch*,
- void* address,
- void* context) {
- base::allocator::WinHeapFree(address);
-}
-
-size_t DefaultWinHeapGetSizeEstimateImpl(const AllocatorDispatch*,
- void* address,
- void* context) {
- return base::allocator::WinHeapGetSizeEstimate(address);
-}
-
-} // namespace
-
-// Guarantee that default_dispatch is compile-time initialized to avoid using
-// it before initialization (allocations before main in release builds with
-// optimizations disabled).
-constexpr AllocatorDispatch AllocatorDispatch::default_dispatch = {
- &DefaultWinHeapMallocImpl,
- &DefaultWinHeapCallocImpl,
- &DefaultWinHeapMemalignImpl,
- &DefaultWinHeapReallocImpl,
- &DefaultWinHeapFreeImpl,
- &DefaultWinHeapGetSizeEstimateImpl,
- nullptr, /* batch_malloc_function */
- nullptr, /* batch_free_function */
- nullptr, /* free_definite_size_function */
- nullptr, /* next */
-};
diff --git a/base/allocator/allocator_shim_internals.h b/base/allocator/allocator_shim_internals.h
deleted file mode 100644
index 0196f89..0000000
--- a/base/allocator/allocator_shim_internals.h
+++ /dev/null
@@ -1,44 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_ALLOCATOR_SHIM_INTERNALS_H_
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_INTERNALS_H_
-
-#if defined(__GNUC__)
-
-#include <sys/cdefs.h> // for __THROW
-
-#ifndef __THROW // Not a glibc system
-#ifdef _NOEXCEPT // LLVM libc++ uses noexcept instead
-#define __THROW _NOEXCEPT
-#else
-#define __THROW
-#endif // _NOEXCEPT
-#endif
-
-// Shim layer symbols need to be ALWAYS exported, regardless of component build.
-//
-// If an exported symbol is linked into a DSO, it may be preempted by a
-// definition in the main executable. If this happens to an allocator symbol, it
-// will mean that the DSO will use the main executable's allocator. This is
-// normally relatively harmless -- regular allocations should all use the same
-// allocator, but if the DSO tries to hook the allocator it will not see any
-// allocations.
-//
-// However, if LLVM LTO is enabled, the compiler may inline the shim layer
-// symbols into callers. The end result is that allocator calls in DSOs may use
-// either the main executable's allocator or the DSO's allocator, depending on
-// whether the call was inlined. This is arguably a bug in LLVM caused by its
-// somewhat irregular handling of symbol interposition (see llvm.org/PR23501).
-// To work around the bug we use noinline to prevent the symbols from being
-// inlined.
-//
-// In the long run we probably want to avoid linking the allocator bits into
-// DSOs altogether. This will save a little space and stop giving DSOs the false
-// impression that they can hook the allocator.
-#define SHIM_ALWAYS_EXPORT __attribute__((visibility("default"), noinline))
-
-#endif // __GNUC__
-
-#endif // BASE_ALLOCATOR_ALLOCATOR_SHIM_INTERNALS_H_
diff --git a/base/allocator/allocator_shim_override_cpp_symbols.h b/base/allocator/allocator_shim_override_cpp_symbols.h
deleted file mode 100644
index 3313687..0000000
--- a/base/allocator/allocator_shim_override_cpp_symbols.h
+++ /dev/null
@@ -1,51 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_CPP_SYMBOLS_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_CPP_SYMBOLS_H_
-
-// Preempt the default new/delete C++ symbols so they call the shim entry
-// points. This file is strongly inspired by tcmalloc's
-// libc_override_redefine.h.
-
-#include <new>
-
-#include "base/allocator/allocator_shim_internals.h"
-
-SHIM_ALWAYS_EXPORT void* operator new(size_t size) {
- return ShimCppNew(size);
-}
-
-SHIM_ALWAYS_EXPORT void operator delete(void* p) __THROW {
- ShimCppDelete(p);
-}
-
-SHIM_ALWAYS_EXPORT void* operator new[](size_t size) {
- return ShimCppNew(size);
-}
-
-SHIM_ALWAYS_EXPORT void operator delete[](void* p) __THROW {
- ShimCppDelete(p);
-}
-
-SHIM_ALWAYS_EXPORT void* operator new(size_t size,
- const std::nothrow_t&) __THROW {
- return ShimCppNew(size);
-}
-
-SHIM_ALWAYS_EXPORT void* operator new[](size_t size,
- const std::nothrow_t&) __THROW {
- return ShimCppNew(size);
-}
-
-SHIM_ALWAYS_EXPORT void operator delete(void* p, const std::nothrow_t&) __THROW {
- ShimCppDelete(p);
-}
-
-SHIM_ALWAYS_EXPORT void operator delete[](void* p,
- const std::nothrow_t&) __THROW {
- ShimCppDelete(p);
-}
diff --git a/base/allocator/allocator_shim_override_glibc_weak_symbols.h b/base/allocator/allocator_shim_override_glibc_weak_symbols.h
deleted file mode 100644
index 9142bda..0000000
--- a/base/allocator/allocator_shim_override_glibc_weak_symbols.h
+++ /dev/null
@@ -1,119 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_GLIBC_WEAK_SYMBOLS_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_GLIBC_WEAK_SYMBOLS_H_
-
-// Alias the internal Glibc symbols to the shim entry points.
-// This file is strongly inspired by tcmalloc's libc_override_glibc.h.
-// Effectively this file does two things:
-// 1) Re-define the __malloc_hook & co symbols. Those symbols are defined as
-// weak in glibc and are meant to be defined strongly by client processes
-// to hook calls initiated from within glibc.
-// 2) Re-define Glibc-specific symbols (__libc_malloc). The historical reason
-// is that in the past (in RedHat 9) we had instances of libraries that were
-// allocating via malloc() and freeing using __libc_free().
-// See tcmalloc's libc_override_glibc.h for more context.
-
-#include <features.h> // for __GLIBC__
-#include <malloc.h>
-#include <unistd.h>
-
-#include <new>
-
-#include "base/allocator/allocator_shim_internals.h"
-
-// __MALLOC_HOOK_VOLATILE is not defined in all Glibc headers.
-#if !defined(__MALLOC_HOOK_VOLATILE)
-#define MALLOC_HOOK_MAYBE_VOLATILE /**/
-#else
-#define MALLOC_HOOK_MAYBE_VOLATILE __MALLOC_HOOK_VOLATILE
-#endif
-
-extern "C" {
-
-// 1) Re-define malloc_hook weak symbols.
-namespace {
-
-void* GlibcMallocHook(size_t size, const void* caller) {
- return ShimMalloc(size, nullptr);
-}
-
-void* GlibcReallocHook(void* ptr, size_t size, const void* caller) {
- return ShimRealloc(ptr, size, nullptr);
-}
-
-void GlibcFreeHook(void* ptr, const void* caller) {
- return ShimFree(ptr, nullptr);
-}
-
-void* GlibcMemalignHook(size_t align, size_t size, const void* caller) {
- return ShimMemalign(align, size, nullptr);
-}
-
-} // namespace
-
-__attribute__((visibility("default"))) void* (
- *MALLOC_HOOK_MAYBE_VOLATILE __malloc_hook)(size_t,
- const void*) = &GlibcMallocHook;
-
-__attribute__((visibility("default"))) void* (
- *MALLOC_HOOK_MAYBE_VOLATILE __realloc_hook)(void*, size_t, const void*) =
- &GlibcReallocHook;
-
-__attribute__((visibility("default"))) void (
- *MALLOC_HOOK_MAYBE_VOLATILE __free_hook)(void*,
- const void*) = &GlibcFreeHook;
-
-__attribute__((visibility("default"))) void* (
- *MALLOC_HOOK_MAYBE_VOLATILE __memalign_hook)(size_t, size_t, const void*) =
- &GlibcMemalignHook;
-
-// 2) Redefine libc symbols themselves.
-
-SHIM_ALWAYS_EXPORT void* __libc_malloc(size_t size) {
- return ShimMalloc(size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void __libc_free(void* ptr) {
- ShimFree(ptr, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __libc_realloc(void* ptr, size_t size) {
- return ShimRealloc(ptr, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __libc_calloc(size_t n, size_t size) {
- return ShimCalloc(n, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void __libc_cfree(void* ptr) {
- return ShimFree(ptr, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __libc_memalign(size_t align, size_t s) {
- return ShimMemalign(align, s, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __libc_valloc(size_t size) {
- return ShimValloc(size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __libc_pvalloc(size_t size) {
- return ShimPvalloc(size);
-}
-
-SHIM_ALWAYS_EXPORT int __posix_memalign(void** r, size_t a, size_t s) {
- return ShimPosixMemalign(r, a, s);
-}
-
-} // extern "C"
-
-// Safety check.
-#if !defined(__GLIBC__)
-#error The target platform does not seem to use Glibc. Disable the allocator \
-shim by setting use_allocator_shim=false in GN args.
-#endif
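For context on item (1) in the comment above, this is how a client process traditionally used the weak `__malloc_hook` symbol that the shim re-defines. The mechanism is deprecated and was removed in glibc 2.34, so the sketch below is historical illustration only, with hypothetical logging:

```cpp
// Historical sketch of glibc's __malloc_hook (deprecated, removed in 2.34).
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <malloc.h>

namespace {

void* (*g_old_hook)(size_t, const void*) = nullptr;

void* MyMallocHook(size_t size, const void* caller) {
  __malloc_hook = g_old_hook;  // Uninstall ourselves to avoid recursion.
  void* result = malloc(size);
  std::fprintf(stderr, "malloc(%zu) from %p -> %p\n", size, caller, result);
  __malloc_hook = &MyMallocHook;  // Re-install.
  return result;
}

}  // namespace

int main() {
  g_old_hook = __malloc_hook;
  __malloc_hook = &MyMallocHook;
  void* p = malloc(16);
  free(p);
  return 0;
}
```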
diff --git a/base/allocator/allocator_shim_override_libc_symbols.h b/base/allocator/allocator_shim_override_libc_symbols.h
deleted file mode 100644
index b77cbb1..0000000
--- a/base/allocator/allocator_shim_override_libc_symbols.h
+++ /dev/null
@@ -1,63 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// This header's purpose is to preempt the libc symbols for malloc/new so that
-// they call the shim layer entry points.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_LIBC_SYMBOLS_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_LIBC_SYMBOLS_H_
-
-#include <malloc.h>
-
-#include "base/allocator/allocator_shim_internals.h"
-
-extern "C" {
-
-SHIM_ALWAYS_EXPORT void* malloc(size_t size) __THROW {
- return ShimMalloc(size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void free(void* ptr) __THROW {
- ShimFree(ptr, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* realloc(void* ptr, size_t size) __THROW {
- return ShimRealloc(ptr, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* calloc(size_t n, size_t size) __THROW {
- return ShimCalloc(n, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void cfree(void* ptr) __THROW {
- ShimFree(ptr, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* memalign(size_t align, size_t s) __THROW {
- return ShimMemalign(align, s, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* valloc(size_t size) __THROW {
- return ShimValloc(size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* pvalloc(size_t size) __THROW {
- return ShimPvalloc(size);
-}
-
-SHIM_ALWAYS_EXPORT int posix_memalign(void** r, size_t a, size_t s) __THROW {
- return ShimPosixMemalign(r, a, s);
-}
-
-// The default dispatch translation unit also has to define the following
-// symbols (unless they are ultimately routed to the system symbols):
-// void malloc_stats(void);
-// int mallopt(int, int);
-// struct mallinfo mallinfo(void);
-// size_t malloc_size(void*);
-// size_t malloc_usable_size(const void*);
-
-} // extern "C"
diff --git a/base/allocator/allocator_shim_override_linker_wrapped_symbols.h b/base/allocator/allocator_shim_override_linker_wrapped_symbols.h
deleted file mode 100644
index 6bf73c3..0000000
--- a/base/allocator/allocator_shim_override_linker_wrapped_symbols.h
+++ /dev/null
@@ -1,54 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_LINKER_WRAPPED_SYMBOLS_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_LINKER_WRAPPED_SYMBOLS_H_
-
-// This header overrides the __wrap_X symbols when using the link-time
-// -Wl,-wrap,malloc shim-layer approach (see README.md).
-// All references to malloc, free, etc. within the linker unit that gets the
-// -wrap linker flags (e.g., libchrome.so) will be rewritten by the linker as
-// references to __wrap_malloc, __wrap_free, etc., which are defined here.
-
-#include "base/allocator/allocator_shim_internals.h"
-
-extern "C" {
-
-SHIM_ALWAYS_EXPORT void* __wrap_calloc(size_t n, size_t size) {
- return ShimCalloc(n, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void __wrap_free(void* ptr) {
- ShimFree(ptr, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __wrap_malloc(size_t size) {
- return ShimMalloc(size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __wrap_memalign(size_t align, size_t size) {
- return ShimMemalign(align, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT int __wrap_posix_memalign(void** res,
- size_t align,
- size_t size) {
- return ShimPosixMemalign(res, align, size);
-}
-
-SHIM_ALWAYS_EXPORT void* __wrap_pvalloc(size_t size) {
- return ShimPvalloc(size);
-}
-
-SHIM_ALWAYS_EXPORT void* __wrap_realloc(void* address, size_t size) {
- return ShimRealloc(address, size, nullptr);
-}
-
-SHIM_ALWAYS_EXPORT void* __wrap_valloc(size_t size) {
- return ShimValloc(size, nullptr);
-}
-
-} // extern "C"
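For readers unfamiliar with the flag this header depends on: `-Wl,-wrap,foo` makes the linker resolve every undefined reference to `foo` as `__wrap_foo`, and every reference to `__real_foo` as the original `foo`. A standalone sketch of the same mechanism, independent of the shim (the build line is a typical GCC/Clang invocation, not one from this tree):

```cpp
// Standalone illustration of -Wl,-wrap,malloc. Build, for example, with:
//   g++ wrap_demo.cc -Wl,-wrap,malloc -o wrap_demo
#include <cstddef>
#include <cstdio>
#include <cstdlib>

extern "C" {
void* __real_malloc(size_t size);  // Resolved by the linker to libc malloc.

// Every undefined malloc reference in this linker unit becomes __wrap_malloc.
void* __wrap_malloc(size_t size) {
  void* ptr = __real_malloc(size);
  std::fprintf(stderr, "malloc(%zu) = %p\n", size, ptr);
  return ptr;
}
}  // extern "C"

int main() {
  void* p = std::malloc(32);  // Routed through __wrap_malloc.
  std::free(p);
  return 0;
}
```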
diff --git a/base/allocator/allocator_shim_override_mac_symbols.h b/base/allocator/allocator_shim_override_mac_symbols.h
deleted file mode 100644
index 0b65edb..0000000
--- a/base/allocator/allocator_shim_override_mac_symbols.h
+++ /dev/null
@@ -1,60 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_MAC_SYMBOLS_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_MAC_SYMBOLS_H_
-
-#include "base/allocator/malloc_zone_functions_mac.h"
-#include "third_party/apple_apsl/malloc.h"
-
-namespace base {
-namespace allocator {
-
-MallocZoneFunctions MallocZoneFunctionsToReplaceDefault() {
- MallocZoneFunctions new_functions;
- memset(&new_functions, 0, sizeof(MallocZoneFunctions));
- new_functions.size = [](malloc_zone_t* zone, const void* ptr) -> size_t {
- return ShimGetSizeEstimate(ptr, zone);
- };
- new_functions.malloc = [](malloc_zone_t* zone, size_t size) -> void* {
- return ShimMalloc(size, zone);
- };
- new_functions.calloc = [](malloc_zone_t* zone, size_t n,
- size_t size) -> void* {
- return ShimCalloc(n, size, zone);
- };
- new_functions.valloc = [](malloc_zone_t* zone, size_t size) -> void* {
- return ShimValloc(size, zone);
- };
- new_functions.free = [](malloc_zone_t* zone, void* ptr) {
- ShimFree(ptr, zone);
- };
- new_functions.realloc = [](malloc_zone_t* zone, void* ptr,
- size_t size) -> void* {
- return ShimRealloc(ptr, size, zone);
- };
- new_functions.batch_malloc = [](struct _malloc_zone_t* zone, size_t size,
- void** results,
- unsigned num_requested) -> unsigned {
- return ShimBatchMalloc(size, results, num_requested, zone);
- };
- new_functions.batch_free = [](struct _malloc_zone_t* zone, void** to_be_freed,
- unsigned num_to_be_freed) -> void {
- ShimBatchFree(to_be_freed, num_to_be_freed, zone);
- };
- new_functions.memalign = [](malloc_zone_t* zone, size_t alignment,
- size_t size) -> void* {
- return ShimMemalign(alignment, size, zone);
- };
- new_functions.free_definite_size = [](malloc_zone_t* zone, void* ptr,
- size_t size) {
- ShimFreeDefiniteSize(ptr, size, zone);
- };
- return new_functions;
-}
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/allocator_shim_override_ucrt_symbols_win.h b/base/allocator/allocator_shim_override_ucrt_symbols_win.h
deleted file mode 100644
index ed02656..0000000
--- a/base/allocator/allocator_shim_override_ucrt_symbols_win.h
+++ /dev/null
@@ -1,100 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// This header defines symbols to override the same functions in the Visual C++
-// CRT implementation.
-
-#ifdef BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_UCRT_SYMBOLS_WIN_H_
-#error This header is meant to be included only once by allocator_shim.cc
-#endif
-#define BASE_ALLOCATOR_ALLOCATOR_SHIM_OVERRIDE_UCRT_SYMBOLS_WIN_H_
-
-#include <malloc.h>
-
-#include <windows.h>
-
-extern "C" {
-
-void* (*malloc_unchecked)(size_t) = &base::allocator::UncheckedAlloc;
-
-namespace {
-
-int win_new_mode = 0;
-
-} // namespace
-
-// This function behaves similarly to MSVC's _set_new_mode.
-// If flag is 0 (default), calls to malloc will behave normally.
-// If flag is 1, calls to malloc will behave like calls to new,
-// and the std::new_handler will be invoked on failure.
-// Returns the previous mode.
-//
-// Replaces _set_new_mode in ucrt\heap\new_mode.cpp
-int _set_new_mode(int flag) {
- // The MS CRT calls this function early on in startup, so this serves as a low
- // overhead proof that the allocator shim is in place for this process.
- base::allocator::g_is_win_shim_layer_initialized = true;
- int old_mode = win_new_mode;
- win_new_mode = flag;
-
- base::allocator::SetCallNewHandlerOnMallocFailure(win_new_mode != 0);
-
- return old_mode;
-}
-
-// Replaces _query_new_mode in ucrt\heap\new_mode.cpp
-int _query_new_mode() {
- return win_new_mode;
-}
-
-// These symbols override the CRT's implementation of the same functions.
-__declspec(restrict) void* malloc(size_t size) {
- return ShimMalloc(size, nullptr);
-}
-
-void free(void* ptr) {
- ShimFree(ptr, nullptr);
-}
-
-__declspec(restrict) void* realloc(void* ptr, size_t size) {
- return ShimRealloc(ptr, size, nullptr);
-}
-
-__declspec(restrict) void* calloc(size_t n, size_t size) {
- return ShimCalloc(n, size, nullptr);
-}
-
-// The symbols
-// * __acrt_heap
-// * __acrt_initialize_heap
-// * __acrt_uninitialize_heap
-// * _get_heap_handle
-// must be overridden all or none, as they are otherwise supplied
-// by heap_handle.obj in the ucrt.lib file.
-HANDLE __acrt_heap = nullptr;
-
-bool __acrt_initialize_heap() {
- __acrt_heap = ::HeapCreate(0, 0, 0);
- return true;
-}
-
-bool __acrt_uninitialize_heap() {
- ::HeapDestroy(__acrt_heap);
- __acrt_heap = nullptr;
- return true;
-}
-
-intptr_t _get_heap_handle(void) {
- return reinterpret_cast<intptr_t>(__acrt_heap);
-}
-
-// The default dispatch translation unit also has to define the following
-// symbols (unless they are ultimately routed to the system symbols):
-// void malloc_stats(void);
-// int mallopt(int, int);
-// struct mallinfo mallinfo(void);
-// size_t malloc_size(void*);
-// size_t malloc_usable_size(const void*);
-
-} // extern "C"
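Tying the pieces above together, a sketch of how the new-mode plumbing behaves once the shim is linked in; `OnOutOfMemory` is a hypothetical handler, while `_set_new_mode` and `std::set_new_handler` are the standard CRT and C++ entry points:

```cpp
// Sketch: malloc failures invoking the new_handler once mode 1 is set.
#include <cstddef>
#include <cstdlib>
#include <new>
#include <new.h>  // MSVC-specific; declares _set_new_mode.

void OnOutOfMemory() {
  std::abort();  // A failed malloc() now lands here, like a failed new.
}

int main() {
  std::set_new_handler(&OnOutOfMemory);
  _set_new_mode(1);  // Handled by the shim's replacement defined above.
  // An implausibly large request: with mode 1 this calls OnOutOfMemory
  // instead of returning nullptr.
  void* p = std::malloc(static_cast<size_t>(-1));
  std::free(p);
  return 0;
}
```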
diff --git a/base/allocator/debugallocation_shim.cc b/base/allocator/debugallocation_shim.cc
deleted file mode 100644
index 479cfca..0000000
--- a/base/allocator/debugallocation_shim.cc
+++ /dev/null
@@ -1,20 +0,0 @@
-// Copyright (c) 2012 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// Workaround for crosbug:629593. Using AFDO on the tcmalloc files is
-// causing problems. The tcmalloc files depend on stack layouts and
-// AFDO can mess with them. Better not to use AFDO there. This is a
-// temporary hack. We will add a mechanism in the build system to
-// avoid using -fauto-profile for tcmalloc files.
-#if !defined(__clang__) && (defined(OS_CHROMEOS) || __GNUC__ > 5)
-// Note that this option only seems to be available in the chromeos GCC 4.9
-// toolchain, and stock GCC 5 and up.
-#pragma GCC optimize ("no-auto-profile")
-#endif
-
-#if defined(TCMALLOC_FOR_DEBUGALLOCATION)
-#include "third_party/tcmalloc/chromium/src/debugallocation.cc"
-#else
-#include "third_party/tcmalloc/chromium/src/tcmalloc.cc"
-#endif
diff --git a/base/allocator/malloc_zone_functions_mac.cc b/base/allocator/malloc_zone_functions_mac.cc
deleted file mode 100644
index 9a41496..0000000
--- a/base/allocator/malloc_zone_functions_mac.cc
+++ /dev/null
@@ -1,114 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/malloc_zone_functions_mac.h"
-
-#include "base/atomicops.h"
-#include "base/synchronization/lock.h"
-
-namespace base {
-namespace allocator {
-
-MallocZoneFunctions g_malloc_zones[kMaxZoneCount];
-static_assert(std::is_pod<MallocZoneFunctions>::value,
- "MallocZoneFunctions must be POD");
-
-void StoreZoneFunctions(const ChromeMallocZone* zone,
- MallocZoneFunctions* functions) {
- memset(functions, 0, sizeof(MallocZoneFunctions));
- functions->malloc = zone->malloc;
- functions->calloc = zone->calloc;
- functions->valloc = zone->valloc;
- functions->free = zone->free;
- functions->realloc = zone->realloc;
- functions->size = zone->size;
- CHECK(functions->malloc && functions->calloc && functions->valloc &&
- functions->free && functions->realloc && functions->size);
-
- // These functions might be nullptr.
- functions->batch_malloc = zone->batch_malloc;
- functions->batch_free = zone->batch_free;
-
- if (zone->version >= 5) {
- // Not all custom malloc zones have a memalign.
- functions->memalign = zone->memalign;
- }
- if (zone->version >= 6) {
- // This may be nullptr.
- functions->free_definite_size = zone->free_definite_size;
- }
-
- functions->context = zone;
-}
-
-namespace {
-
-// All modifications to g_malloc_zones are gated behind this lock.
-// Dispatch to a malloc zone does not need to acquire this lock.
-base::Lock& GetLock() {
- static base::Lock* g_lock = new base::Lock;
- return *g_lock;
-}
-
-void EnsureMallocZonesInitializedLocked() {
- GetLock().AssertAcquired();
-}
-
-int g_zone_count = 0;
-
-bool IsMallocZoneAlreadyStoredLocked(ChromeMallocZone* zone) {
- EnsureMallocZonesInitializedLocked();
- GetLock().AssertAcquired();
- for (int i = 0; i < g_zone_count; ++i) {
- if (g_malloc_zones[i].context == reinterpret_cast<void*>(zone))
- return true;
- }
- return false;
-}
-
-} // namespace
-
-bool StoreMallocZone(ChromeMallocZone* zone) {
- base::AutoLock l(GetLock());
- EnsureMallocZonesInitializedLocked();
- if (IsMallocZoneAlreadyStoredLocked(zone))
- return false;
-
- if (g_zone_count == kMaxZoneCount)
- return false;
-
- StoreZoneFunctions(zone, &g_malloc_zones[g_zone_count]);
- ++g_zone_count;
-
- // No other thread can possibly see these stores at this point. The code that
- // reads these values is triggered after this function returns, so we want to
- // guarantee that they are committed at this stage.
- base::subtle::MemoryBarrier();
- return true;
-}
-
-bool IsMallocZoneAlreadyStored(ChromeMallocZone* zone) {
- base::AutoLock l(GetLock());
- return IsMallocZoneAlreadyStoredLocked(zone);
-}
-
-bool DoesMallocZoneNeedReplacing(ChromeMallocZone* zone,
- const MallocZoneFunctions* functions) {
- return IsMallocZoneAlreadyStored(zone) && zone->malloc != functions->malloc;
-}
-
-int GetMallocZoneCountForTesting() {
- base::AutoLock l(GetLock());
- return g_zone_count;
-}
-
-void ClearAllMallocZonesForTesting() {
- base::AutoLock l(GetLock());
- EnsureMallocZonesInitializedLocked();
- memset(g_malloc_zones, 0, kMaxZoneCount * sizeof(MallocZoneFunctions));
- g_zone_count = 0;
-}
-
-} // namespace allocator
-} // namespace base
diff --git a/base/allocator/malloc_zone_functions_mac.h b/base/allocator/malloc_zone_functions_mac.h
deleted file mode 100644
index a7f5543..0000000
--- a/base/allocator/malloc_zone_functions_mac.h
+++ /dev/null
@@ -1,103 +0,0 @@
-// Copyright 2017 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_MALLOC_ZONE_FUNCTIONS_MAC_H_
-#define BASE_ALLOCATOR_MALLOC_ZONE_FUNCTIONS_MAC_H_
-
-#include <malloc/malloc.h>
-#include <stddef.h>
-
-#include "base/base_export.h"
-#include "base/logging.h"
-#include "third_party/apple_apsl/malloc.h"
-
-namespace base {
-namespace allocator {
-
-typedef void* (*malloc_type)(struct _malloc_zone_t* zone, size_t size);
-typedef void* (*calloc_type)(struct _malloc_zone_t* zone,
- size_t num_items,
- size_t size);
-typedef void* (*valloc_type)(struct _malloc_zone_t* zone, size_t size);
-typedef void (*free_type)(struct _malloc_zone_t* zone, void* ptr);
-typedef void* (*realloc_type)(struct _malloc_zone_t* zone,
- void* ptr,
- size_t size);
-typedef void* (*memalign_type)(struct _malloc_zone_t* zone,
- size_t alignment,
- size_t size);
-typedef unsigned (*batch_malloc_type)(struct _malloc_zone_t* zone,
- size_t size,
- void** results,
- unsigned num_requested);
-typedef void (*batch_free_type)(struct _malloc_zone_t* zone,
- void** to_be_freed,
- unsigned num_to_be_freed);
-typedef void (*free_definite_size_type)(struct _malloc_zone_t* zone,
- void* ptr,
- size_t size);
-typedef size_t (*size_fn_type)(struct _malloc_zone_t* zone, const void* ptr);
-
-struct MallocZoneFunctions {
- malloc_type malloc;
- calloc_type calloc;
- valloc_type valloc;
- free_type free;
- realloc_type realloc;
- memalign_type memalign;
- batch_malloc_type batch_malloc;
- batch_free_type batch_free;
- free_definite_size_type free_definite_size;
- size_fn_type size;
- const ChromeMallocZone* context;
-};
-
-BASE_EXPORT void StoreZoneFunctions(const ChromeMallocZone* zone,
- MallocZoneFunctions* functions);
-static constexpr int kMaxZoneCount = 30;
-BASE_EXPORT extern MallocZoneFunctions g_malloc_zones[kMaxZoneCount];
-
-// The array g_malloc_zones stores all information about malloc zones before
-// they are shimmed. This information needs to be accessed during dispatch back
-// into the zone, and additional zones may be added later in the execution of
-// the program, so the array needs to be both thread-safe and high-performance.
-//
-// We begin by creating an array of MallocZoneFunctions of fixed size. We will
-// never modify the container, which provides thread-safety to iterators. When
-// we want to add a MallocZoneFunctions to the container, we:
-// 1. Fill in all the fields.
-// 2. Update the total zone count.
-// 3. Insert a memory barrier.
-// 4. Insert our shim.
-//
-// Each MallocZoneFunctions is uniquely identified by |context|, which is a
-// pointer to the original malloc zone. When we wish to dispatch back to the
-// original malloc zones, we iterate through the array, looking for a matching
-// |context|.
-//
-// Most allocations go through the default allocator. We will ensure that the
-// default allocator is stored as the first MallocZoneFunctions.
-//
-// Returns whether the zone was successfully stored.
-BASE_EXPORT bool StoreMallocZone(ChromeMallocZone* zone);
-BASE_EXPORT bool IsMallocZoneAlreadyStored(ChromeMallocZone* zone);
-BASE_EXPORT bool DoesMallocZoneNeedReplacing(
- ChromeMallocZone* zone,
- const MallocZoneFunctions* functions);
-
-BASE_EXPORT int GetMallocZoneCountForTesting();
-BASE_EXPORT void ClearAllMallocZonesForTesting();
-
-inline MallocZoneFunctions& GetFunctionsForZone(void* zone) {
- for (unsigned int i = 0; i < kMaxZoneCount; ++i) {
- if (g_malloc_zones[i].context == zone)
- return g_malloc_zones[i];
- }
- IMMEDIATE_CRASH();
-}
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_MALLOC_ZONE_FUNCTIONS_MAC_H_
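The registry comment above describes a publish-with-barrier scheme; the following generic sketch restates it outside the shim, with illustrative names and `std::atomic_thread_fence` standing in for `base::subtle::MemoryBarrier()`:

```cpp
// Generic sketch of the fixed-array publication scheme described above.
#include <atomic>
#include <cstddef>

struct Entry {
  const void* context;  // Key: identifies the original zone.
  void* (*malloc_fn)(void* zone, size_t size);
};

constexpr int kMaxEntries = 30;
Entry g_entries[kMaxEntries];  // Fixed storage: never reallocated.
int g_entry_count = 0;         // Mutated only under the writer's lock.

// Writer side (caller holds a lock, as StoreMallocZone does).
bool Publish(const Entry& entry) {
  if (g_entry_count == kMaxEntries)
    return false;
  g_entries[g_entry_count] = entry;  // 1. Fill in all the fields.
  ++g_entry_count;                   // 2. Update the total count.
  std::atomic_thread_fence(std::memory_order_seq_cst);  // 3. Barrier.
  return true;  // 4. Only now is it safe to install the shim that reads it.
}

// Reader side (lock-free, like GetFunctionsForZone above).
Entry* Lookup(const void* context) {
  for (int i = 0; i < kMaxEntries; ++i) {
    if (g_entries[i].context == context)
      return &g_entries[i];
  }
  return nullptr;
}
```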
diff --git a/base/allocator/partition_allocator/OWNERS b/base/allocator/partition_allocator/OWNERS
deleted file mode 100644
index b0a2a85..0000000
--- a/base/allocator/partition_allocator/OWNERS
+++ /dev/null
@@ -1,8 +0,0 @@
-ajwong@chromium.org
-haraken@chromium.org
-palmer@chromium.org
-tsepez@chromium.org
-
-# TEAM: platform-architecture-dev@chromium.org
-# Also: security-dev@chromium.org
-# COMPONENT: Blink>MemoryAllocator>Partition
diff --git a/base/allocator/partition_allocator/PartitionAlloc.md b/base/allocator/partition_allocator/PartitionAlloc.md
deleted file mode 100644
index 982d91f..0000000
--- a/base/allocator/partition_allocator/PartitionAlloc.md
+++ /dev/null
@@ -1,102 +0,0 @@
-# PartitionAlloc Design
-
-This document describes PartitionAlloc at a high level. For documentation about
-its implementation, see the comments in `partition_alloc.h`.
-
-[TOC]
-
-## Overview
-
-PartitionAlloc is a memory allocator optimized for security, low allocation
-latency (when called appropriately), and good space efficiency (when called
-appropriately). This document aims to help you understand how PartitionAlloc
-works so that you can use it effectively.
-
-## Partitions And Buckets
-
-A *partition* is a heap that contains certain object types, objects of certain
-sizes, or objects of a certain lifetime (as the caller prefers). Callers can
-create as many partitions as they need. Each partition is separate and protected
-from any other partitions.
-
-Each partition holds multiple buckets. A *bucket* is a region in a partition
-that contains similar-sized objects.
-
-PartitionAlloc aligns each object allocation with the closest bucket size. For
-example, if a partition has 3 buckets for 64 bytes, 256 bytes, and 1024 bytes,
-then PartitionAlloc will satisfy an allocation request for 128 bytes by rounding
-it up to 256 bytes and allocating from the second bucket.
-
-The special allocator class `template <size_t N> class
-SizeSpecificPartitionAllocator` will satisfy allocations only of size
-`kMaxAllocation = N - kAllocationGranularity` or less, and contains buckets for
-all `n * kAllocationGranularity` (n = 1, 2, ...,
-`kMaxAllocation / kAllocationGranularity`). Attempts to allocate more than
-`kMaxAllocation` will fail. (A usage sketch follows this file.)
-
-## Performance
-
-The current implementation is optimized for the main thread use-case. For
-example, PartitionAlloc doesn't have threaded caches.
-
-PartitionAlloc is designed to be extremely fast in its fast paths. The fast
-paths of allocation and deallocation require just 2 (reasonably predictable)
-branches. The number of operations in the fast paths is minimal, leading to the
-possibility of inlining.
-
-For an example of how to use partitions to get good performance and good safety,
-see Blink's usage, as described in `wtf/allocator/Allocator.md`.
-
-Large allocations (> kGenericMaxBucketed == 960KB) are realized by direct
-memory mapping (mmap). This size makes sense because 960KB = 0xF0000. The next
-larger bucket size is 1MB = 0x100000, which is greater than half the available
-space in a SuperPage, meaning it would not be possible to pack even 2
-sequential allocations into a SuperPage.
-
-`PartitionRootGeneric::Alloc()` acquires a lock for thread safety. (The current
-implementation uses a spin lock on the assumption that thread contention will be
-rare in its callers. The original caller was Blink, where this is generally
-true. Spin locks also have the benefit of simplicity.)
-
-Callers can get thread-unsafe performance using a
-`SizeSpecificPartitionAllocator` or otherwise using `PartitionAlloc` (instead of
-`PartitionRootGeneric::Alloc()`). Callers can also arrange for low contention,
-such as by using a dedicated partition for single-threaded, latency-critical
-allocations.
-
-Because PartitionAlloc guarantees that address space regions used for one
-partition are never reused for other partitions, partitions can consume a
-large amount of virtual address space (even if not actual memory).
-
-Mixing various random objects in the same partition will generally lead to lower
-efficiency. For good performance, group similar objects into the same partition.
-
-## Security
-
-Security is one of the most important goals of PartitionAlloc.
-
-PartitionAlloc guarantees that different partitions exist in different regions
-of the process' address space. When the caller has freed all objects contained
-in a page in a partition, PartitionAlloc returns the physical memory to the
-operating system, but continues to reserve the region of address space.
-PartitionAlloc will only reuse an address space region for the same partition.
-
-PartitionAlloc also guarantees that:
-
-* Linear overflows cannot corrupt into the partition. (There is a guard page at
-the beginning of each partition.)
-
-* Linear overflows cannot corrupt out of the partition. (There is a guard page
-at the end of each partition.)
-
-* Linear overflow or underflow cannot corrupt the allocation metadata.
-PartitionAlloc records metadata in a dedicated region out-of-line (not adjacent
-to objects).
-
-* Objects of different sizes will likely be allocated in different buckets, and
-hence at different addresses. One page can contain only similar-sized objects.
-
-* Dereference of a freelist pointer should fault.
-
-* Partial pointer overwrite of freelist pointer should fault.
-
-* Large allocations have guard pages at the beginning and end.
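As referenced in the buckets section above, a hedged sketch of the generic calling convention, based on the API in `partition_alloc.h` at this revision; treat the exact names and signatures as approximations rather than a reference:

```cpp
// Sketch of PartitionAlloc's generic API; names are approximate.
#include "base/allocator/partition_allocator/partition_alloc.h"

static base::PartitionAllocatorGeneric g_partition;

void Example() {
  g_partition.init();  // One-time setup of the partition's root.

  // Requests are rounded up to the nearest bucket: with 64/256/1024-byte
  // buckets, a 128-byte request is served from the 256-byte bucket.
  void* p =
      base::PartitionAllocGeneric(g_partition.root(), 128, "MyObjects");
  base::PartitionFreeGeneric(g_partition.root(), p);
}
```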
diff --git a/base/allocator/partition_allocator/address_space_randomization.cc b/base/allocator/partition_allocator/address_space_randomization.cc
deleted file mode 100644
index b25fbdc..0000000
--- a/base/allocator/partition_allocator/address_space_randomization.cc
+++ /dev/null
@@ -1,124 +0,0 @@
-// Copyright 2014 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/address_space_randomization.h"
-
-#include "base/allocator/partition_allocator/page_allocator.h"
-#include "base/allocator/partition_allocator/spin_lock.h"
-#include "base/lazy_instance.h"
-#include "base/rand_util.h"
-#include "build_config.h"
-
-#if defined(OS_WIN)
-#include <windows.h> // Must be in front of other Windows header files.
-
-#include <VersionHelpers.h>
-#endif
-
-namespace base {
-
-namespace {
-
-// This is the same PRNG as used by tcmalloc for mapping address randomness;
-// see http://burtleburtle.net/bob/rand/smallprng.html
-struct ranctx {
- subtle::SpinLock lock;
- bool initialized;
- uint32_t a;
- uint32_t b;
- uint32_t c;
- uint32_t d;
-};
-
-static LazyInstance<ranctx>::Leaky s_ranctx = LAZY_INSTANCE_INITIALIZER;
-
-#define rot(x, k) (((x) << (k)) | ((x) >> (32 - (k))))
-
-uint32_t ranvalInternal(ranctx* x) {
- uint32_t e = x->a - rot(x->b, 27);
- x->a = x->b ^ rot(x->c, 17);
- x->b = x->c + x->d;
- x->c = x->d + e;
- x->d = e + x->a;
- return x->d;
-}
-
-#undef rot
-
-uint32_t ranval(ranctx* x) {
- subtle::SpinLock::Guard guard(x->lock);
- if (UNLIKELY(!x->initialized)) {
- const uint64_t r1 = RandUint64();
- const uint64_t r2 = RandUint64();
-
- x->a = static_cast<uint32_t>(r1);
- x->b = static_cast<uint32_t>(r1 >> 32);
- x->c = static_cast<uint32_t>(r2);
- x->d = static_cast<uint32_t>(r2 >> 32);
-
- x->initialized = true;
- }
-
- return ranvalInternal(x);
-}
-
-} // namespace
-
-void SetRandomPageBaseSeed(int64_t seed) {
- ranctx* x = s_ranctx.Pointer();
- subtle::SpinLock::Guard guard(x->lock);
- // Set RNG to initial state.
- x->initialized = true;
- x->a = x->b = static_cast<uint32_t>(seed);
- x->c = x->d = static_cast<uint32_t>(seed >> 32);
-}
-
-void* GetRandomPageBase() {
- uintptr_t random = static_cast<uintptr_t>(ranval(s_ranctx.Pointer()));
-
-#if defined(ARCH_CPU_64_BITS)
- random <<= 32ULL;
- random |= static_cast<uintptr_t>(ranval(s_ranctx.Pointer()));
-
-// The kASLRMask and kASLROffset constants are defined per OS and build
-// configuration in address_space_randomization.h.
-#if defined(OS_WIN) && !defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- // Windows >= 8.1 has the full 47 bits. Use them where available.
- static bool windows_81 = false;
- static bool windows_81_initialized = false;
- if (!windows_81_initialized) {
- windows_81 = IsWindows8Point1OrGreater();
- windows_81_initialized = true;
- }
- if (!windows_81) {
- random &= internal::kASLRMaskBefore8_10;
- } else {
- random &= internal::kASLRMask;
- }
- random += internal::kASLROffset;
-#else
- random &= internal::kASLRMask;
- random += internal::kASLROffset;
-#endif // defined(OS_WIN) && !defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-#else // defined(ARCH_CPU_32_BITS)
-#if defined(OS_WIN)
- // On win32 host systems the randomization plus huge alignment causes
- // excessive fragmentation. Plus most of these systems lack ASLR, so the
- // randomization isn't buying anything. In that case we just skip it.
- // TODO(jschuh): Just dump the randomization when HE-ASLR is present.
- static BOOL is_wow64 = -1;
- if (is_wow64 == -1 && !IsWow64Process(GetCurrentProcess(), &is_wow64))
- is_wow64 = FALSE;
- if (!is_wow64)
- return nullptr;
-#endif // defined(OS_WIN)
- random &= internal::kASLRMask;
- random += internal::kASLROffset;
-#endif // defined(ARCH_CPU_32_BITS)
-
- DCHECK_EQ(0ULL, (random & kPageAllocationGranularityOffsetMask));
- return reinterpret_cast<void*>(random);
-}
-
-} // namespace base
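The two entry points above compose as follows; a sketch of deterministic use in a test, using `SetRandomPageBaseSeed`/`GetRandomPageBase` from the file above with an arbitrary seed value:

```cpp
// Sketch: reproducible ASLR hints via the API defined in the file above.
#include "base/allocator/partition_allocator/address_space_randomization.h"
#include "base/logging.h"

void AslrHintExample() {
  base::SetRandomPageBaseSeed(0x12345678);  // Arbitrary fixed seed.
  void* hint1 = base::GetRandomPageBase();
  base::SetRandomPageBaseSeed(0x12345678);
  void* hint2 = base::GetRandomPageBase();
  DCHECK_EQ(hint1, hint2);  // Same seed, same sequence of hints.
  // |hint1| is only a preferred base for a later mmap/VirtualAlloc; the
  // kernel is free to place the mapping elsewhere.
}
```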
diff --git a/base/allocator/partition_allocator/address_space_randomization.h b/base/allocator/partition_allocator/address_space_randomization.h
deleted file mode 100644
index ab40e2b..0000000
--- a/base/allocator/partition_allocator/address_space_randomization.h
+++ /dev/null
@@ -1,206 +0,0 @@
-// Copyright 2014 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_ADDRESS_SPACE_RANDOMIZATION
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_ADDRESS_SPACE_RANDOMIZATION
-
-#include "base/allocator/partition_allocator/page_allocator.h"
-#include "base/base_export.h"
-#include "build_config.h"
-
-namespace base {
-
-// Sets the seed for the random number generator used by GetRandomPageBase in
-// order to generate a predictable sequence of addresses. May be called multiple
-// times.
-BASE_EXPORT void SetRandomPageBaseSeed(int64_t seed);
-
-// Calculates a random preferred mapping address. In calculating an address, we
-// balance good ASLR against not fragmenting the address space too badly.
-BASE_EXPORT void* GetRandomPageBase();
-
-namespace internal {
-
-constexpr uintptr_t AslrAddress(uintptr_t mask) {
- return mask & kPageAllocationGranularityBaseMask;
-}
-constexpr uintptr_t AslrMask(uintptr_t bits) {
- return AslrAddress((1ULL << bits) - 1ULL);
-}
-
-// Turn off formatting, because the thicket of nested ifdefs below is
-// incomprehensible without indentation. It is also incomprehensible with
-// indentation, but the only other option is a combinatorial explosion of
-// *_{win,linux,mac,foo}_{32,64}.h files.
-//
-// clang-format off
-
-#if defined(ARCH_CPU_64_BITS)
-
- #if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-
- // We shouldn't allocate system pages at all for sanitizer builds. However,
- // we do, and if random hint addresses interfere with address ranges
- // hard-coded in those tools, bad things happen. This address range is
- // copied from TSAN source but works with all tools. See
- // https://crbug.com/539863.
- constexpr uintptr_t kASLRMask = AslrAddress(0x007fffffffffULL);
- constexpr uintptr_t kASLROffset = AslrAddress(0x7e8000000000ULL);
-
- #elif defined(OS_WIN)
-
- // Windows 8.1 and newer support the full 48-bit address range. Older
- // versions of Windows only support 44 bits. Since kASLROffset is non-zero
- // and may cause a carry, use 47- and 43-bit masks. See
- // http://www.alex-ionescu.com/?p=246
- constexpr uintptr_t kASLRMask = AslrMask(47);
- constexpr uintptr_t kASLRMaskBefore8_10 = AslrMask(43);
- // Try not to map pages into the range where Windows loads DLLs by default.
- constexpr uintptr_t kASLROffset = 0x80000000ULL;
-
- #elif defined(OS_MACOSX)
-
- // macOS as of 10.12.5 does not clean up entries in page map levels 3/4
- // [PDP/PML4] created from mmap or mach_vm_allocate, even after the region
- // is destroyed. Using a virtual address space that is too large causes a
- // leak of about 1 wired [can never be paged out] page per call to mmap. The
- // page is only reclaimed when the process is killed. Confine the hint to a
- // 39-bit section of the virtual address space.
- //
- // This implementation adapted from
- // https://chromium-review.googlesource.com/c/v8/v8/+/557958. The difference
- // is that here we clamp to 39 bits, not 32.
- //
- // TODO(crbug.com/738925): Remove this limitation if/when the macOS behavior
- // changes.
- constexpr uintptr_t kASLRMask = AslrMask(38);
- constexpr uintptr_t kASLROffset = AslrAddress(0x1000000000ULL);
-
- #elif defined(OS_POSIX) || defined(OS_FUCHSIA)
-
- #if defined(ARCH_CPU_X86_64)
-
- // Linux (and macOS) support the full 47-bit user space of x64 processors.
- // Use only 46 to allow the kernel a chance to fulfill the request.
- constexpr uintptr_t kASLRMask = AslrMask(46);
- constexpr uintptr_t kASLROffset = AslrAddress(0);
-
- #elif defined(ARCH_CPU_ARM64)
-
- #if defined(OS_ANDROID)
-
- // Restrict the address range on Android to avoid a large performance
- // regression in single-process WebViews. See https://crbug.com/837640.
- constexpr uintptr_t kASLRMask = AslrMask(30);
- constexpr uintptr_t kASLROffset = AslrAddress(0x20000000ULL);
-
- #else
-
- // ARM64 on Linux has 39-bit user space. Use 38 bits since kASLROffset
- // could cause a carry.
- constexpr uintptr_t kASLRMask = AslrMask(38);
- constexpr uintptr_t kASLROffset = AslrAddress(0x1000000000ULL);
-
- #endif
-
- #elif defined(ARCH_CPU_PPC64)
-
- #if defined(OS_AIX)
-
- // AIX has 64 bits of virtual addressing, but we limit the address range
- // to (a) minimize segment lookaside buffer (SLB) misses; and (b) use
- // extra address space to isolate the mmap regions.
- constexpr uintptr_t kASLRMask = AslrMask(30);
- constexpr uintptr_t kASLROffset = AslrAddress(0x400000000000ULL);
-
- #elif defined(ARCH_CPU_BIG_ENDIAN)
-
- // Big-endian Linux PPC has 44 bits of virtual addressing. Use 42.
- constexpr uintptr_t kASLRMask = AslrMask(42);
- constexpr uintptr_t kASLROffset = AslrAddress(0);
-
- #else // !defined(OS_AIX) && !defined(ARCH_CPU_BIG_ENDIAN)
-
- // Little-endian Linux PPC has 48 bits of virtual addressing. Use 46.
- constexpr uintptr_t kASLRMask = AslrMask(46);
- constexpr uintptr_t kASLROffset = AslrAddress(0);
-
- #endif // !defined(OS_AIX) && !defined(ARCH_CPU_BIG_ENDIAN)
-
- #elif defined(ARCH_CPU_S390X)
-
- // Linux on Z uses bits 22 - 32 for Region Indexing, which translates to
-      // 42 bits of virtual addressing. Truncate to 40 bits to allow the
-      // kernel a chance to fulfill the request.
- constexpr uintptr_t kASLRMask = AslrMask(40);
- constexpr uintptr_t kASLROffset = AslrAddress(0);
-
- #elif defined(ARCH_CPU_S390)
-
- // 31 bits of virtual addressing. Truncate to 29 bits to allow the kernel
- // a chance to fulfill the request.
- constexpr uintptr_t kASLRMask = AslrMask(29);
- constexpr uintptr_t kASLROffset = AslrAddress(0);
-
- #else // !defined(ARCH_CPU_X86_64) && !defined(ARCH_CPU_PPC64) &&
- // !defined(ARCH_CPU_S390X) && !defined(ARCH_CPU_S390)
-
- // For all other POSIX variants, use 30 bits.
- constexpr uintptr_t kASLRMask = AslrMask(30);
-
- #if defined(OS_SOLARIS)
-
- // For our Solaris/illumos mmap hint, we pick a random address in the
- // bottom half of the top half of the address space (that is, the third
- // quarter). Because we do not MAP_FIXED, this will be treated only as a
- // hint -- the system will not fail to mmap because something else
- // happens to already be mapped at our random address. We deliberately
- // set the hint high enough to get well above the system's break (that
- // is, the heap); Solaris and illumos will try the hint and if that
- // fails allocate as if there were no hint at all. The high hint
- // prevents the break from getting hemmed in at low values, ceding half
- // of the address space to the system heap.
- constexpr uintptr_t kASLROffset = AslrAddress(0x80000000ULL);
-
- #elif defined(OS_AIX)
-
- // The range 0x30000000 - 0xD0000000 is available on AIX; choose the
- // upper range.
- constexpr uintptr_t kASLROffset = AslrAddress(0x90000000ULL);
-
- #else // !defined(OS_SOLARIS) && !defined(OS_AIX)
-
- // The range 0x20000000 - 0x60000000 is relatively unpopulated across a
-        // variety of ASLR modes (PAE kernel, NX compat mode, etc.) and on macOS
- // 10.6 and 10.7.
- constexpr uintptr_t kASLROffset = AslrAddress(0x20000000ULL);
-
- #endif // !defined(OS_SOLARIS) && !defined(OS_AIX)
-
- #endif // !defined(ARCH_CPU_X86_64) && !defined(ARCH_CPU_PPC64) &&
- // !defined(ARCH_CPU_S390X) && !defined(ARCH_CPU_S390)
-
- #endif // defined(OS_POSIX)
-
-#elif defined(ARCH_CPU_32_BITS)
-
- // This is a good range on 32-bit Windows and Android (the only platforms on
- // which we support 32-bitness). Allocates in the 0.5 - 1.5 GiB region. There
- // is no issue with carries here.
- constexpr uintptr_t kASLRMask = AslrMask(30);
- constexpr uintptr_t kASLROffset = AslrAddress(0x20000000ULL);
-
-#else
-
- #error Please tell us about your exotic hardware! Sounds interesting.
-
-#endif // defined(ARCH_CPU_32_BITS)
-
-// clang-format on
-
-} // namespace internal
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_ADDRESS_SPACE_RANDOMIZATION
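A usage sketch for the two exported declarations above; per the header comment,
the seed may be set repeatedly, so reseeding restarts the hint sequence (handy
for reproducing allocation layouts in tests):

  #include "base/allocator/partition_allocator/address_space_randomization.h"

  void DeterministicHints() {
    base::SetRandomPageBaseSeed(42);
    void* first = base::GetRandomPageBase();   // Hint only; nothing is mapped.
    base::SetRandomPageBaseSeed(42);
    void* second = base::GetRandomPageBase();  // Same seed, same first hint.
    // first == second here, by construction of the seeded generator.
    (void)first;
    (void)second;
  }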
diff --git a/base/allocator/partition_allocator/oom.h b/base/allocator/partition_allocator/oom.h
deleted file mode 100644
index e2d197c..0000000
--- a/base/allocator/partition_allocator/oom.h
+++ /dev/null
@@ -1,37 +0,0 @@
-// Copyright (c) 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_OOM_H
-#define BASE_ALLOCATOR_OOM_H
-
-#include "base/logging.h"
-
-#if defined(OS_WIN)
-#include <windows.h>
-#endif
-
-// We do not want trivial entry points that just call OOM_CRASH() to be
-// commoned up by linker ICF / comdat folding.
-#define OOM_CRASH_PREVENT_ICF() \
- volatile int oom_crash_inhibit_icf = __LINE__; \
- ALLOW_UNUSED_LOCAL(oom_crash_inhibit_icf)
-
-// OOM_CRASH() - Specialization of IMMEDIATE_CRASH which will raise a custom
-// exception on Windows to signal this is OOM and not a normal assert.
-#if defined(OS_WIN)
-#define OOM_CRASH() \
- do { \
- OOM_CRASH_PREVENT_ICF(); \
- ::RaiseException(0xE0000008, EXCEPTION_NONCONTINUABLE, 0, nullptr); \
- IMMEDIATE_CRASH(); \
- } while (0)
-#else
-#define OOM_CRASH() \
- do { \
- OOM_CRASH_PREVENT_ICF(); \
- IMMEDIATE_CRASH(); \
- } while (0)
-#endif
-
-#endif // BASE_ALLOCATOR_OOM_H
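A hedged sketch of the intended call pattern: each distinct failure path gets
its own OOM_CRASH() site, and the macro's OOM_CRASH_PREVENT_ICF() local keeps
the linker from folding those sites together (AllocOrDie is a hypothetical
caller, not part of this header):

  #include <cstddef>
  #include <cstdlib>

  #include "base/allocator/partition_allocator/oom.h"

  void* AllocOrDie(size_t size) {
    void* p = std::malloc(size);
    if (!p)
      OOM_CRASH();  // Raises the custom 0xE0000008 exception on Windows.
    return p;
  }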
diff --git a/base/allocator/partition_allocator/page_allocator.cc b/base/allocator/partition_allocator/page_allocator.cc
deleted file mode 100644
index 72e34df..0000000
--- a/base/allocator/partition_allocator/page_allocator.cc
+++ /dev/null
@@ -1,258 +0,0 @@
-// Copyright (c) 2013 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/page_allocator.h"
-
-#include <limits.h>
-
-#include "base/allocator/partition_allocator/address_space_randomization.h"
-#include "base/allocator/partition_allocator/page_allocator_internal.h"
-#include "base/allocator/partition_allocator/spin_lock.h"
-#include "base/base_export.h"
-#include "base/compiler_specific.h"
-#include "base/lazy_instance.h"
-#include "base/logging.h"
-#include "base/numerics/checked_math.h"
-#include "build_config.h"
-
-#include <atomic>
-
-#if defined(OS_WIN)
-#include <windows.h>
-#endif
-
-#if defined(OS_WIN)
-#include "base/allocator/partition_allocator/page_allocator_internals_win.h"
-#elif defined(OS_POSIX) || defined(OS_FUCHSIA)
-#include "base/allocator/partition_allocator/page_allocator_internals_posix.h"
-#else
-#error Platform not supported.
-#endif
-
-namespace base {
-
-namespace {
-
-// We may reserve/release address space on different threads.
-LazyInstance<subtle::SpinLock>::Leaky s_reserveLock = LAZY_INSTANCE_INITIALIZER;
-
-// We only support a single block of reserved address space.
-void* s_reservation_address = nullptr;
-size_t s_reservation_size = 0;
-
-void* AllocPagesIncludingReserved(void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit) {
- void* ret =
- SystemAllocPages(address, length, accessibility, page_tag, commit);
- if (ret == nullptr) {
- const bool cant_alloc_length = kHintIsAdvisory || address == nullptr;
- if (cant_alloc_length) {
- // The system cannot allocate |length| bytes. Release any reserved address
- // space and try once more.
- ReleaseReservation();
- ret = SystemAllocPages(address, length, accessibility, page_tag, commit);
- }
- }
- return ret;
-}
-
-// Trims |base| to given |trim_length| and |alignment|.
-//
-// On failure, on Windows, this function returns nullptr and frees |base|.
-void* TrimMapping(void* base,
- size_t base_length,
- size_t trim_length,
- uintptr_t alignment,
- PageAccessibilityConfiguration accessibility,
- bool commit) {
- size_t pre_slack = reinterpret_cast<uintptr_t>(base) & (alignment - 1);
- if (pre_slack) {
- pre_slack = alignment - pre_slack;
- }
- size_t post_slack = base_length - pre_slack - trim_length;
- DCHECK(base_length >= trim_length || pre_slack || post_slack);
- DCHECK(pre_slack < base_length);
- DCHECK(post_slack < base_length);
- return TrimMappingInternal(base, base_length, trim_length, accessibility,
- commit, pre_slack, post_slack);
-}
-
-} // namespace
-
-void* SystemAllocPages(void* hint,
- size_t length,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit) {
- DCHECK(!(length & kPageAllocationGranularityOffsetMask));
- DCHECK(!(reinterpret_cast<uintptr_t>(hint) &
- kPageAllocationGranularityOffsetMask));
- DCHECK(commit || accessibility == PageInaccessible);
- return SystemAllocPagesInternal(hint, length, accessibility, page_tag,
- commit);
-}
-
-void* AllocPages(void* address,
- size_t length,
- size_t align,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit) {
- DCHECK(length >= kPageAllocationGranularity);
- DCHECK(!(length & kPageAllocationGranularityOffsetMask));
- DCHECK(align >= kPageAllocationGranularity);
- // Alignment must be power of 2 for masking math to work.
- DCHECK_EQ(align & (align - 1), 0UL);
- DCHECK(!(reinterpret_cast<uintptr_t>(address) &
- kPageAllocationGranularityOffsetMask));
- uintptr_t align_offset_mask = align - 1;
- uintptr_t align_base_mask = ~align_offset_mask;
- DCHECK(!(reinterpret_cast<uintptr_t>(address) & align_offset_mask));
-
-#if defined(OS_LINUX) && defined(ARCH_CPU_64_BITS)
- // On 64 bit Linux, we may need to adjust the address space limit for
- // guarded allocations.
- if (length >= kMinimumGuardedMemorySize) {
- CHECK_EQ(PageInaccessible, accessibility);
- CHECK(!commit);
- if (!AdjustAddressSpaceLimit(base::checked_cast<int64_t>(length))) {
- DLOG(WARNING) << "Could not adjust address space by " << length;
- // Fall through. Try the allocation, since we may have a reserve.
- }
- }
-#endif
-
- // If the client passed null as the address, choose a good one.
- if (address == nullptr) {
- address = GetRandomPageBase();
- address = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(address) &
- align_base_mask);
- }
-
- // First try to force an exact-size, aligned allocation from our random base.
-#if defined(ARCH_CPU_32_BITS)
- // On 32 bit systems, first try one random aligned address, and then try an
- // aligned address derived from the value of |ret|.
- constexpr int kExactSizeTries = 2;
-#else
- // On 64 bit systems, try 3 random aligned addresses.
- constexpr int kExactSizeTries = 3;
-#endif
-
- for (int i = 0; i < kExactSizeTries; ++i) {
- void* ret = AllocPagesIncludingReserved(address, length, accessibility,
- page_tag, commit);
- if (ret != nullptr) {
- // If the alignment is to our liking, we're done.
- if (!(reinterpret_cast<uintptr_t>(ret) & align_offset_mask))
- return ret;
- // Free the memory and try again.
- FreePages(ret, length);
- } else {
- // |ret| is null; if this try was unhinted, we're OOM.
- if (kHintIsAdvisory || address == nullptr)
- return nullptr;
- }
-
-#if defined(ARCH_CPU_32_BITS)
- // For small address spaces, try the first aligned address >= |ret|. Note
- // |ret| may be null, in which case |address| becomes null.
- address = reinterpret_cast<void*>(
- (reinterpret_cast<uintptr_t>(ret) + align_offset_mask) &
- align_base_mask);
-#else // defined(ARCH_CPU_64_BITS)
- // Keep trying random addresses on systems that have a large address space.
- address = GetRandomPageBase();
- address = reinterpret_cast<void*>(reinterpret_cast<uintptr_t>(address) &
- align_base_mask);
-#endif
- }
-
- // Make a larger allocation so we can force alignment.
- size_t try_length = length + (align - kPageAllocationGranularity);
- CHECK(try_length >= length);
- void* ret;
-
- do {
- // Continue randomizing only on POSIX.
- address = kHintIsAdvisory ? GetRandomPageBase() : nullptr;
- ret = AllocPagesIncludingReserved(address, try_length, accessibility,
- page_tag, commit);
- // The retries are for Windows, where a race can steal our mapping on
- // resize.
- } while (ret != nullptr &&
- (ret = TrimMapping(ret, try_length, length, align, accessibility,
- commit)) == nullptr);
-
- return ret;
-}
-
-void FreePages(void* address, size_t length) {
- DCHECK(!(reinterpret_cast<uintptr_t>(address) &
- kPageAllocationGranularityOffsetMask));
- DCHECK(!(length & kPageAllocationGranularityOffsetMask));
- FreePagesInternal(address, length);
-}
-
-bool SetSystemPagesAccess(void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
- DCHECK(!(length & kSystemPageOffsetMask));
- return SetSystemPagesAccessInternal(address, length, accessibility);
-}
-
-void DecommitSystemPages(void* address, size_t length) {
- DCHECK_EQ(0UL, length & kSystemPageOffsetMask);
- DecommitSystemPagesInternal(address, length);
-}
-
-bool RecommitSystemPages(void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
- DCHECK_EQ(0UL, length & kSystemPageOffsetMask);
- DCHECK_NE(PageInaccessible, accessibility);
- return RecommitSystemPagesInternal(address, length, accessibility);
-}
-
-void DiscardSystemPages(void* address, size_t length) {
- DCHECK_EQ(0UL, length & kSystemPageOffsetMask);
- DiscardSystemPagesInternal(address, length);
-}
-
-bool ReserveAddressSpace(size_t size) {
- // To avoid deadlock, call only SystemAllocPages.
- subtle::SpinLock::Guard guard(s_reserveLock.Get());
- if (s_reservation_address == nullptr) {
- void* mem = SystemAllocPages(nullptr, size, PageInaccessible,
- PageTag::kChromium, false);
- if (mem != nullptr) {
- // We guarantee this alignment when reserving address space.
- DCHECK(!(reinterpret_cast<uintptr_t>(mem) &
- kPageAllocationGranularityOffsetMask));
- s_reservation_address = mem;
- s_reservation_size = size;
- return true;
- }
- }
- return false;
-}
-
-void ReleaseReservation() {
- // To avoid deadlock, call only FreePages.
- subtle::SpinLock::Guard guard(s_reserveLock.Get());
- if (s_reservation_address != nullptr) {
- FreePages(s_reservation_address, s_reservation_size);
- s_reservation_address = nullptr;
- s_reservation_size = 0;
- }
-}
-
-uint32_t GetAllocPageErrorCode() {
- return s_allocPageErrorCode;
-}
-
-} // namespace base
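To make the slack arithmetic in TrimMapping() concrete, a standalone example
with hypothetical numbers: one 4 KiB page requested at 8 KiB alignment, so
AllocPages() over-allocates length + (align - granularity) = 8 KiB:

  #include <cassert>
  #include <cstdint>

  int main() {
    constexpr uintptr_t kAlign = 0x2000;       // Requested alignment (8 KiB).
    constexpr uintptr_t kTrimLength = 0x1000;  // Requested length (4 KiB).
    constexpr uintptr_t kBaseLength = 0x2000;  // Over-allocated run.
    uintptr_t base = 0x5000;                   // Misaligned base from the OS.

    uintptr_t pre_slack = base & (kAlign - 1);
    if (pre_slack)
      pre_slack = kAlign - pre_slack;          // Bytes to drop at the front.
    uintptr_t post_slack = kBaseLength - pre_slack - kTrimLength;

    assert(pre_slack == 0x1000);               // Unmap [0x5000, 0x6000).
    assert(post_slack == 0);                   // Nothing to drop at the back.
    assert(((base + pre_slack) & (kAlign - 1)) == 0);  // Result is aligned.
  }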
diff --git a/base/allocator/partition_allocator/page_allocator.h b/base/allocator/partition_allocator/page_allocator.h
deleted file mode 100644
index 0db2fde..0000000
--- a/base/allocator/partition_allocator/page_allocator.h
+++ /dev/null
@@ -1,179 +0,0 @@
-// Copyright (c) 2013 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_H
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_H
-
-#include <stdint.h>
-
-#include <cstddef>
-
-#include "base/allocator/partition_allocator/page_allocator_constants.h"
-#include "base/base_export.h"
-#include "base/compiler_specific.h"
-#include "build_config.h"
-
-namespace base {
-
-enum PageAccessibilityConfiguration {
- PageInaccessible,
- PageRead,
- PageReadWrite,
- PageReadExecute,
- // This flag is deprecated and will go away soon.
- // TODO(bbudge) Remove this as soon as V8 doesn't need RWX pages.
- PageReadWriteExecute,
-};
-
-// macOS supports tagged memory regions to help in debugging.
-enum class PageTag {
- kFirst = 240, // Minimum tag value.
- kChromium = 254, // Chromium page, including off-heap V8 ArrayBuffers.
- kV8 = 255, // V8 heap pages.
- kLast = kV8 // Maximum tag value.
-};
-
-// Allocate one or more pages.
-//
-// The requested |address| is just a hint; the actual address returned may
-// differ. The returned address will be aligned at least to |align| bytes.
-// |length| is in bytes, and must be a multiple of |kPageAllocationGranularity|.
-// |align| is in bytes, and must be a power-of-two multiple of
-// |kPageAllocationGranularity|.
-//
-// If |address| is null, then a suitable and randomized address will be chosen
-// automatically.
-//
-// |page_accessibility| controls the permission of the allocated pages.
-//
-// This call will return null if the allocation cannot be satisfied.
-BASE_EXPORT void* AllocPages(void* address,
- size_t length,
- size_t align,
- PageAccessibilityConfiguration page_accessibility,
- PageTag tag = PageTag::kChromium,
- bool commit = true);
-
-// Free one or more pages starting at |address| and continuing for |length|
-// bytes.
-//
-// |address| and |length| must match a previous call to |AllocPages|. Therefore,
-// |address| must be aligned to |kPageAllocationGranularity| bytes, and |length|
-// must be a multiple of |kPageAllocationGranularity|.
-BASE_EXPORT void FreePages(void* address, size_t length);
-
-// Mark one or more system pages, starting at |address| with the given
-// |page_accessibility|. |length| must be a multiple of |kSystemPageSize| bytes.
-//
-// Returns true if the permission change succeeded. In most cases you must
-// |CHECK| the result.
-BASE_EXPORT WARN_UNUSED_RESULT bool SetSystemPagesAccess(
- void* address,
- size_t length,
- PageAccessibilityConfiguration page_accessibility);
-
-// Decommit one or more system pages starting at |address| and continuing for
-// |length| bytes. |length| must be a multiple of |kSystemPageSize|.
-//
-// Decommitted means that physical resources (RAM or swap) backing the allocated
-// virtual address range are released back to the system, but the address space
-// is still allocated to the process (possibly using up page table entries or
-// other accounting resources). Any access to a decommitted region of memory
-// is an error and will generate a fault.
-//
-// This operation is not atomic on all platforms.
-//
-// Note: "Committed memory" is a Windows Memory Subsystem concept that ensures
-// processes will not fault when touching a committed memory region. There is
-// no analogue in the POSIX memory API where virtual memory pages are
-// best-effort allocated resources on the first touch. To create a
-// platform-agnostic abstraction, this API simulates the Windows "decommit"
-// state by both discarding the region (allowing the OS to avoid swap
-// operations) and changing the page protections so accesses fault.
-//
-// TODO(ajwong): This currently does not change page protections on POSIX
-// systems due to a perf regression. Tracked at http://crbug.com/766882.
-BASE_EXPORT void DecommitSystemPages(void* address, size_t length);
-
-// Recommit one or more system pages, starting at |address| and continuing for
-// |length| bytes with the given |page_accessibility|. |length| must be a
-// multiple of |kSystemPageSize|.
-//
-// Decommitted system pages must be recommitted with their original permissions
-// before they are used again.
-//
-// Returns true if the recommit change succeeded. In most cases you must |CHECK|
-// the result.
-BASE_EXPORT WARN_UNUSED_RESULT bool RecommitSystemPages(
- void* address,
- size_t length,
- PageAccessibilityConfiguration page_accessibility);
-
-// Discard one or more system pages starting at |address| and continuing for
-// |length| bytes. |length| must be a multiple of |kSystemPageSize|.
-//
-// Discarding is a hint to the system that the page is no longer required. The
-// hint may:
-// - Do nothing.
-// - Discard the page immediately, freeing up physical pages.
-// - Discard the page at some time in the future in response to memory
-// pressure.
-//
-// Only committed pages should be discarded. Discarding a page does not decommit
-// it, and it is valid to discard an already-discarded page. A read or write to
-// a discarded page will not fault.
-//
-// Reading from a discarded page may return the original page content, or a page
-// full of zeroes.
-//
-// Writing to a discarded page is the only guaranteed way to tell the system
-// that the page is required again. Once written to, the content of the page is
-// guaranteed stable once more. After being written to, the page content may be
-// based on the original page content, or a page of zeroes.
-BASE_EXPORT void DiscardSystemPages(void* address, size_t length);
-
-// Rounds up |address| to the next multiple of |kSystemPageSize|. Returns
-// 0 for an |address| of 0.
-constexpr ALWAYS_INLINE uintptr_t RoundUpToSystemPage(uintptr_t address) {
- return (address + kSystemPageOffsetMask) & kSystemPageBaseMask;
-}
-
-// Rounds down |address| to the previous multiple of |kSystemPageSize|. Returns
-// 0 for an |address| of 0.
-constexpr ALWAYS_INLINE uintptr_t RoundDownToSystemPage(uintptr_t address) {
- return address & kSystemPageBaseMask;
-}
-
-// Rounds up |address| to the next multiple of |kPageAllocationGranularity|.
-// Returns 0 for an |address| of 0.
-constexpr ALWAYS_INLINE uintptr_t
-RoundUpToPageAllocationGranularity(uintptr_t address) {
- return (address + kPageAllocationGranularityOffsetMask) &
- kPageAllocationGranularityBaseMask;
-}
-
-// Rounds down |address| to the previous multiple of
-// |kPageAllocationGranularity|. Returns 0 for an |address| of 0.
-constexpr ALWAYS_INLINE uintptr_t
-RoundDownToPageAllocationGranularity(uintptr_t address) {
- return address & kPageAllocationGranularityBaseMask;
-}
-
-// Reserves (at least) |size| bytes of address space, aligned to
-// |kPageAllocationGranularity|. This can be called early on to make it more
-// likely that large allocations will succeed. Returns true if the reservation
-// succeeded, false if the reservation failed or a reservation was already made.
-BASE_EXPORT bool ReserveAddressSpace(size_t size);
-
-// Releases any reserved address space. |AllocPages| calls this automatically on
-// an allocation failure. External allocators may also call this on failure.
-BASE_EXPORT void ReleaseReservation();
-
-// Returns |errno| (POSIX) or the result of |GetLastError| (Windows) when |mmap|
-// (POSIX) or |VirtualAlloc| (Windows) fails.
-BASE_EXPORT uint32_t GetAllocPageErrorCode();
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_H
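A minimal round-trip through the public API above, following the documented
contract (reserve and commit, touch, decommit, recommit, free); this is an
illustrative sketch, not a pattern lifted from a real caller:

  #include "base/allocator/partition_allocator/page_allocator.h"
  #include "base/logging.h"

  void PageAllocatorRoundTrip() {
    const size_t length = base::kPageAllocationGranularity;
    // A null address asks AllocPages to pick a randomized, aligned base.
    void* region =
        base::AllocPages(nullptr, length, base::kPageAllocationGranularity,
                         base::PageReadWrite);
    CHECK(region);
    static_cast<char*>(region)[0] = 1;  // Committed and writable.

    // Release physical backing but keep the address range reserved.
    base::DecommitSystemPages(region, length);
    // Recommit with the original permissions before touching it again.
    CHECK(base::RecommitSystemPages(region, length, base::PageReadWrite));

    base::FreePages(region, length);
  }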
diff --git a/base/allocator/partition_allocator/page_allocator_constants.h b/base/allocator/partition_allocator/page_allocator_constants.h
deleted file mode 100644
index a2a2003..0000000
--- a/base/allocator/partition_allocator/page_allocator_constants.h
+++ /dev/null
@@ -1,42 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_CONSTANTS_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_CONSTANTS_H_
-
-#include <stddef.h>
-
-#include "build_config.h"
-
-namespace base {
-#if defined(OS_WIN)
-static constexpr size_t kPageAllocationGranularityShift = 16; // 64KB
-#elif defined(_MIPS_ARCH_LOONGSON)
-static constexpr size_t kPageAllocationGranularityShift = 14; // 16KB
-#else
-static constexpr size_t kPageAllocationGranularityShift = 12; // 4KB
-#endif
-static constexpr size_t kPageAllocationGranularity =
- 1 << kPageAllocationGranularityShift;
-static constexpr size_t kPageAllocationGranularityOffsetMask =
- kPageAllocationGranularity - 1;
-static constexpr size_t kPageAllocationGranularityBaseMask =
- ~kPageAllocationGranularityOffsetMask;
-
-#if defined(_MIPS_ARCH_LOONGSON)
-static constexpr size_t kSystemPageSize = 16384;
-#else
-static constexpr size_t kSystemPageSize = 4096;
-#endif
-static constexpr size_t kSystemPageOffsetMask = kSystemPageSize - 1;
-static_assert((kSystemPageSize & (kSystemPageSize - 1)) == 0,
- "kSystemPageSize must be power of 2");
-static constexpr size_t kSystemPageBaseMask = ~kSystemPageOffsetMask;
-
-static constexpr size_t kPageMetadataShift = 5; // 32 bytes per partition page.
-static constexpr size_t kPageMetadataSize = 1 << kPageMetadataShift;
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_CONSTANTS_H_
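The masks above are the usual power-of-two truncate/round helpers; a small
self-check using the default (non-Windows, non-Loongson) 4 KiB granularity:

  #include <cstddef>

  constexpr size_t kGranularity = 1 << 12;          // 0x1000
  constexpr size_t kOffsetMask = kGranularity - 1;  // 0x0fff
  constexpr size_t kBaseMask = ~kOffsetMask;

  static_assert((0x1234 & kBaseMask) == 0x1000, "truncate to granularity");
  static_assert(((0x1234 + kOffsetMask) & kBaseMask) == 0x2000,
                "round up to granularity");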
diff --git a/base/allocator/partition_allocator/page_allocator_internal.h b/base/allocator/partition_allocator/page_allocator_internal.h
deleted file mode 100644
index c8c003d..0000000
--- a/base/allocator/partition_allocator/page_allocator_internal.h
+++ /dev/null
@@ -1,18 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNAL_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNAL_H_
-
-namespace base {
-
-void* SystemAllocPages(void* hint,
- size_t length,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit);
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNAL_H_
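Per the DCHECKs in page_allocator.cc, SystemAllocPages() requires a
granularity-aligned hint and length, and an uncommitted request must be
inaccessible. A hedged sketch of a conforming call (ReserveOneChunk is a
hypothetical helper, not part of this header):

  #include "base/allocator/partition_allocator/page_allocator.h"
  #include "base/allocator/partition_allocator/page_allocator_internal.h"

  void* ReserveOneChunk() {
    // Reserve one granule of address space without committing it.
    return base::SystemAllocPages(/*hint=*/nullptr,
                                  base::kPageAllocationGranularity,
                                  base::PageInaccessible,
                                  base::PageTag::kChromium,
                                  /*commit=*/false);
  }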
diff --git a/base/allocator/partition_allocator/page_allocator_internals_posix.h b/base/allocator/partition_allocator/page_allocator_internals_posix.h
deleted file mode 100644
index baadbdc..0000000
--- a/base/allocator/partition_allocator/page_allocator_internals_posix.h
+++ /dev/null
@@ -1,183 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_POSIX_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_POSIX_H_
-
-#include <errno.h>
-#include <sys/mman.h>
-
-#if defined(OS_MACOSX)
-#include <mach/mach.h>
-#endif
-#if defined(OS_LINUX)
-#include <sys/resource.h>
-#endif
-
-#include "build_config.h"
-
-#ifndef MAP_ANONYMOUS
-#define MAP_ANONYMOUS MAP_ANON
-#endif
-
-namespace base {
-
-// |mmap| uses a nearby address if the hint address is blocked.
-const bool kHintIsAdvisory = true;
-std::atomic<int32_t> s_allocPageErrorCode{0};
-
-int GetAccessFlags(PageAccessibilityConfiguration accessibility) {
- switch (accessibility) {
- case PageRead:
- return PROT_READ;
- case PageReadWrite:
- return PROT_READ | PROT_WRITE;
- case PageReadExecute:
- return PROT_READ | PROT_EXEC;
- case PageReadWriteExecute:
- return PROT_READ | PROT_WRITE | PROT_EXEC;
- default:
- NOTREACHED();
- FALLTHROUGH;
- case PageInaccessible:
- return PROT_NONE;
- }
-}
-
-#if defined(OS_LINUX) && defined(ARCH_CPU_64_BITS)
-
-// Multiple guarded memory regions may exceed the process address space limit.
-// This function will raise or lower the limit by |amount|.
-bool AdjustAddressSpaceLimit(int64_t amount) {
- struct rlimit old_rlimit;
- if (getrlimit(RLIMIT_AS, &old_rlimit))
- return false;
- const rlim_t new_limit =
- CheckAdd(old_rlimit.rlim_cur, amount).ValueOrDefault(old_rlimit.rlim_max);
- const struct rlimit new_rlimit = {std::min(new_limit, old_rlimit.rlim_max),
- old_rlimit.rlim_max};
- // setrlimit will fail if limit > old_rlimit.rlim_max.
- return setrlimit(RLIMIT_AS, &new_rlimit) == 0;
-}
-
-// Current WASM guarded memory regions have 8 GiB of address space. There are
-// schemes that reduce that to 4 GiB.
-constexpr size_t kMinimumGuardedMemorySize = 1ULL << 32; // 4 GiB
-
-#endif // defined(OS_LINUX) && defined(ARCH_CPU_64_BITS)
-
-void* SystemAllocPagesInternal(void* hint,
- size_t length,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit) {
-#if defined(OS_MACOSX)
- // Use a custom tag to make it easier to distinguish Partition Alloc regions
- // in vmmap(1). Tags between 240-255 are supported.
- DCHECK_LE(PageTag::kFirst, page_tag);
- DCHECK_GE(PageTag::kLast, page_tag);
- int fd = VM_MAKE_TAG(static_cast<int>(page_tag));
-#else
- int fd = -1;
-#endif
-
- int access_flag = GetAccessFlags(accessibility);
- void* ret =
- mmap(hint, length, access_flag, MAP_ANONYMOUS | MAP_PRIVATE, fd, 0);
- if (ret == MAP_FAILED) {
- s_allocPageErrorCode = errno;
- ret = nullptr;
- }
- return ret;
-}
-
-void* TrimMappingInternal(void* base,
- size_t base_length,
- size_t trim_length,
- PageAccessibilityConfiguration accessibility,
- bool commit,
- size_t pre_slack,
- size_t post_slack) {
- void* ret = base;
- // We can resize the allocation run. Release unneeded memory before and after
- // the aligned range.
- if (pre_slack) {
- int res = munmap(base, pre_slack);
- CHECK(!res);
- ret = reinterpret_cast<char*>(base) + pre_slack;
- }
- if (post_slack) {
- int res = munmap(reinterpret_cast<char*>(ret) + trim_length, post_slack);
- CHECK(!res);
- }
- return ret;
-}
-
-bool SetSystemPagesAccessInternal(
- void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
- return 0 == mprotect(address, length, GetAccessFlags(accessibility));
-}
-
-void FreePagesInternal(void* address, size_t length) {
- CHECK(!munmap(address, length));
-
-#if defined(OS_LINUX) && defined(ARCH_CPU_64_BITS)
- // Restore the address space limit.
- if (length >= kMinimumGuardedMemorySize) {
- CHECK(AdjustAddressSpaceLimit(-base::checked_cast<int64_t>(length)));
- }
-#endif
-}
-
-void DecommitSystemPagesInternal(void* address, size_t length) {
- // In POSIX, there is no decommit concept. Discarding is an effective way of
- // implementing the Windows semantics where the OS is allowed to not swap the
- // pages in the region.
- //
- // TODO(ajwong): Also explore setting PageInaccessible to make the protection
- // semantics consistent between Windows and POSIX. This might have a perf cost
- // though as both decommit and recommit would incur an extra syscall.
- // http://crbug.com/766882
- DiscardSystemPages(address, length);
-}
-
-bool RecommitSystemPagesInternal(void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
-#if defined(OS_MACOSX)
- // On macOS, to update accounting, we need to make another syscall. For more
- // details, see https://crbug.com/823915.
- madvise(address, length, MADV_FREE_REUSE);
-#endif
-
- // On POSIX systems, the caller need simply read the memory to recommit it.
- // This has the correct behavior because the API requires the permissions to
- // be the same as before decommitting and all configurations can read.
- return true;
-}
-
-void DiscardSystemPagesInternal(void* address, size_t length) {
-#if defined(OS_MACOSX)
- int ret = madvise(address, length, MADV_FREE_REUSABLE);
- if (ret) {
- // MADV_FREE_REUSABLE sometimes fails, so fall back to MADV_DONTNEED.
- ret = madvise(address, length, MADV_DONTNEED);
- }
- CHECK(0 == ret);
-#else
- // We have experimented with other flags, but with suboptimal results.
- //
- // MADV_FREE (Linux): Makes our memory measurements less predictable;
- // performance benefits unclear.
- //
- // Therefore, we just do the simple thing: MADV_DONTNEED.
- CHECK(!madvise(address, length, MADV_DONTNEED));
-#endif
-}
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_POSIX_H_
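Because kHintIsAdvisory is true here, mmap() may succeed at an address other
than the hint, which is why AllocPages() re-checks alignment instead of
trusting it. A standalone illustration (the hint value is hypothetical):

  #include <sys/mman.h>

  #include <cstdint>
  #include <cstdio>

  int main() {
    void* hint = reinterpret_cast<void*>(0x200000000000ULL);
    void* p = mmap(hint, 4096, PROT_READ | PROT_WRITE,
                   MAP_ANONYMOUS | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED)
      return 1;
    // p may or may not equal hint; only MAP_FIXED would force the address.
    std::printf("hinted %p, got %p\n", hint, p);
    munmap(p, 4096);
    return 0;
  }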
diff --git a/base/allocator/partition_allocator/page_allocator_internals_win.h b/base/allocator/partition_allocator/page_allocator_internals_win.h
deleted file mode 100644
index 1b6adb2..0000000
--- a/base/allocator/partition_allocator/page_allocator_internals_win.h
+++ /dev/null
@@ -1,121 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_WIN_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_WIN_H_
-
-#include "base/allocator/partition_allocator/page_allocator_internal.h"
-
-namespace base {
-
-// |VirtualAlloc| will fail if allocation at the hint address is blocked.
-const bool kHintIsAdvisory = false;
-std::atomic<int32_t> s_allocPageErrorCode{ERROR_SUCCESS};
-
-int GetAccessFlags(PageAccessibilityConfiguration accessibility) {
- switch (accessibility) {
- case PageRead:
- return PAGE_READONLY;
- case PageReadWrite:
- return PAGE_READWRITE;
- case PageReadExecute:
- return PAGE_EXECUTE_READ;
- case PageReadWriteExecute:
- return PAGE_EXECUTE_READWRITE;
- default:
- NOTREACHED();
- FALLTHROUGH;
- case PageInaccessible:
- return PAGE_NOACCESS;
- }
-}
-
-void* SystemAllocPagesInternal(void* hint,
- size_t length,
- PageAccessibilityConfiguration accessibility,
- PageTag page_tag,
- bool commit) {
- DWORD access_flag = GetAccessFlags(accessibility);
- const DWORD type_flags = commit ? (MEM_RESERVE | MEM_COMMIT) : MEM_RESERVE;
- void* ret = VirtualAlloc(hint, length, type_flags, access_flag);
- if (ret == nullptr) {
- s_allocPageErrorCode = GetLastError();
- }
- return ret;
-}
-
-void* TrimMappingInternal(void* base,
- size_t base_length,
- size_t trim_length,
- PageAccessibilityConfiguration accessibility,
- bool commit,
- size_t pre_slack,
- size_t post_slack) {
- void* ret = base;
- if (pre_slack || post_slack) {
- // We cannot resize the allocation run. Free it and retry at the aligned
- // address within the freed range.
- ret = reinterpret_cast<char*>(base) + pre_slack;
- FreePages(base, base_length);
- ret = SystemAllocPages(ret, trim_length, accessibility, PageTag::kChromium,
- commit);
- }
- return ret;
-}
-
-bool SetSystemPagesAccessInternal(
- void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
- if (accessibility == PageInaccessible) {
- return VirtualFree(address, length, MEM_DECOMMIT) != 0;
- } else {
- return nullptr != VirtualAlloc(address, length, MEM_COMMIT,
- GetAccessFlags(accessibility));
- }
-}
-
-void FreePagesInternal(void* address, size_t length) {
- CHECK(VirtualFree(address, 0, MEM_RELEASE));
-}
-
-void DecommitSystemPagesInternal(void* address, size_t length) {
- CHECK(SetSystemPagesAccess(address, length, PageInaccessible));
-}
-
-bool RecommitSystemPagesInternal(void* address,
- size_t length,
- PageAccessibilityConfiguration accessibility) {
- return SetSystemPagesAccess(address, length, accessibility);
-}
-
-void DiscardSystemPagesInternal(void* address, size_t length) {
-  // On Windows, discarded pages are not returned to the system immediately
-  // and are not guaranteed to be zeroed when returned to the application.
- using DiscardVirtualMemoryFunction =
- DWORD(WINAPI*)(PVOID virtualAddress, SIZE_T size);
- static DiscardVirtualMemoryFunction discard_virtual_memory =
- reinterpret_cast<DiscardVirtualMemoryFunction>(-1);
- if (discard_virtual_memory ==
- reinterpret_cast<DiscardVirtualMemoryFunction>(-1))
- discard_virtual_memory =
- reinterpret_cast<DiscardVirtualMemoryFunction>(GetProcAddress(
- GetModuleHandle(L"Kernel32.dll"), "DiscardVirtualMemory"));
- // Use DiscardVirtualMemory when available because it releases faster than
- // MEM_RESET.
- DWORD ret = 1;
- if (discard_virtual_memory) {
- ret = discard_virtual_memory(address, length);
- }
- // DiscardVirtualMemory is buggy in Win10 SP0, so fall back to MEM_RESET on
- // failure.
- if (ret) {
- void* ptr = VirtualAlloc(address, length, MEM_RESET, PAGE_READWRITE);
- CHECK(ptr);
- }
-}
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PAGE_ALLOCATOR_INTERNALS_WIN_H_
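A minimal sketch of the two-step model the Windows implementation above relies
on: reserve address space with no backing, then commit a usable window inside
it (ReserveThenCommit is a hypothetical helper):

  #include <windows.h>

  void* ReserveThenCommit(SIZE_T length) {
    // MEM_RESERVE claims address space only; no pages are committed yet.
    void* region = VirtualAlloc(nullptr, length, MEM_RESERVE, PAGE_NOACCESS);
    if (!region)
      return nullptr;
    // MEM_COMMIT backs the range, as in SetSystemPagesAccessInternal().
    return VirtualAlloc(region, length, MEM_COMMIT, PAGE_READWRITE);
  }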
diff --git a/base/allocator/partition_allocator/partition_alloc.cc b/base/allocator/partition_allocator/partition_alloc.cc
deleted file mode 100644
index 8554673..0000000
--- a/base/allocator/partition_allocator/partition_alloc.cc
+++ /dev/null
@@ -1,727 +0,0 @@
-// Copyright (c) 2013 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/partition_alloc.h"
-
-#include <string.h>
-#include <type_traits>
-
-#include "base/allocator/partition_allocator/partition_direct_map_extent.h"
-#include "base/allocator/partition_allocator/partition_oom.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-#include "base/allocator/partition_allocator/spin_lock.h"
-#include "base/compiler_specific.h"
-#include "base/lazy_instance.h"
-
-// Two partition pages are used as guard / metadata pages, so make sure the
-// super page size is bigger.
-static_assert(base::kPartitionPageSize * 4 <= base::kSuperPageSize,
- "ok super page size");
-static_assert(!(base::kSuperPageSize % base::kPartitionPageSize),
- "ok super page multiple");
-// Four system pages give us room to hack out a still-guard-paged piece
-// of metadata in the middle of a guard partition page.
-static_assert(base::kSystemPageSize * 4 <= base::kPartitionPageSize,
- "ok partition page size");
-static_assert(!(base::kPartitionPageSize % base::kSystemPageSize),
- "ok partition page multiple");
-static_assert(sizeof(base::internal::PartitionPage) <= base::kPageMetadataSize,
- "PartitionPage should not be too big");
-static_assert(sizeof(base::internal::PartitionBucket) <=
- base::kPageMetadataSize,
- "PartitionBucket should not be too big");
-static_assert(sizeof(base::internal::PartitionSuperPageExtentEntry) <=
- base::kPageMetadataSize,
- "PartitionSuperPageExtentEntry should not be too big");
-static_assert(base::kPageMetadataSize * base::kNumPartitionPagesPerSuperPage <=
- base::kSystemPageSize,
- "page metadata fits in hole");
-// Limit to prevent callers accidentally overflowing an int size.
-static_assert(base::kGenericMaxDirectMapped <=
- (1UL << 31) + base::kPageAllocationGranularity,
- "maximum direct mapped allocation");
-// Check that some of our zanier calculations worked out as expected.
-static_assert(base::kGenericSmallestBucket == 8, "generic smallest bucket");
-static_assert(base::kGenericMaxBucketed == 983040, "generic max bucketed");
-static_assert(base::kMaxSystemPagesPerSlotSpan < (1 << 8),
-              "System pages per slot span must be less than 256.");
-
-namespace base {
-
-internal::PartitionRootBase::PartitionRootBase() = default;
-internal::PartitionRootBase::~PartitionRootBase() = default;
-PartitionRoot::PartitionRoot() = default;
-PartitionRoot::~PartitionRoot() = default;
-PartitionRootGeneric::PartitionRootGeneric() = default;
-PartitionRootGeneric::~PartitionRootGeneric() = default;
-PartitionAllocatorGeneric::PartitionAllocatorGeneric() = default;
-PartitionAllocatorGeneric::~PartitionAllocatorGeneric() = default;
-
-static LazyInstance<subtle::SpinLock>::Leaky g_initialized_lock =
- LAZY_INSTANCE_INITIALIZER;
-static bool g_initialized = false;
-
-void (*internal::PartitionRootBase::gOomHandlingFunction)() = nullptr;
-PartitionAllocHooks::AllocationHook* PartitionAllocHooks::allocation_hook_ =
- nullptr;
-PartitionAllocHooks::FreeHook* PartitionAllocHooks::free_hook_ = nullptr;
-
-static void PartitionAllocBaseInit(internal::PartitionRootBase* root) {
- DCHECK(!root->initialized);
- {
- subtle::SpinLock::Guard guard(g_initialized_lock.Get());
- if (!g_initialized) {
- g_initialized = true;
- // We mark the sentinel bucket/page as free to make sure it is skipped by
- // our logic to find a new active page.
- internal::PartitionBucket::get_sentinel_bucket()->active_pages_head =
- internal::PartitionPage::get_sentinel_page();
- }
- }
-
- root->initialized = true;
-
- // This is a "magic" value so we can test if a root pointer is valid.
- root->inverted_self = ~reinterpret_cast<uintptr_t>(root);
-}
-
-void PartitionAllocGlobalInit(void (*oom_handling_function)()) {
- DCHECK(oom_handling_function);
- internal::PartitionRootBase::gOomHandlingFunction = oom_handling_function;
-}
-
-void PartitionRoot::Init(size_t num_buckets, size_t max_allocation) {
- PartitionAllocBaseInit(this);
-
- this->num_buckets = num_buckets;
- this->max_allocation = max_allocation;
- size_t i;
- for (i = 0; i < this->num_buckets; ++i) {
- internal::PartitionBucket* bucket = &this->buckets()[i];
- if (!i)
- bucket->Init(kAllocationGranularity);
- else
- bucket->Init(i << kBucketShift);
- }
-}
-
-void PartitionRootGeneric::Init() {
- subtle::SpinLock::Guard guard(this->lock);
-
- PartitionAllocBaseInit(this);
-
- // Precalculate some shift and mask constants used in the hot path.
- // Example: malloc(41) == 101001 binary.
- // Order is 6 (1 << 6-1) == 32 is highest bit set.
- // order_index is the next three MSB == 010 == 2.
-  // sub_order_index_mask is a mask for the remaining bits == 11 (masking to
-  // 01 for the sub_order_index).
- size_t order;
- for (order = 0; order <= kBitsPerSizeT; ++order) {
- size_t order_index_shift;
- if (order < kGenericNumBucketsPerOrderBits + 1)
- order_index_shift = 0;
- else
- order_index_shift = order - (kGenericNumBucketsPerOrderBits + 1);
- this->order_index_shifts[order] = order_index_shift;
- size_t sub_order_index_mask;
- if (order == kBitsPerSizeT) {
- // This avoids invoking undefined behavior for an excessive shift.
- sub_order_index_mask =
- static_cast<size_t>(-1) >> (kGenericNumBucketsPerOrderBits + 1);
- } else {
- sub_order_index_mask = ((static_cast<size_t>(1) << order) - 1) >>
- (kGenericNumBucketsPerOrderBits + 1);
- }
- this->order_sub_index_masks[order] = sub_order_index_mask;
- }
-
- // Set up the actual usable buckets first.
- // Note that typical values (i.e. min allocation size of 8) will result in
- // pseudo buckets (size==9 etc. or more generally, size is not a multiple
- // of the smallest allocation granularity).
- // We avoid them in the bucket lookup map, but we tolerate them to keep the
- // code simpler and the structures more generic.
- size_t i, j;
- size_t current_size = kGenericSmallestBucket;
- size_t currentIncrement =
- kGenericSmallestBucket >> kGenericNumBucketsPerOrderBits;
- internal::PartitionBucket* bucket = &this->buckets[0];
- for (i = 0; i < kGenericNumBucketedOrders; ++i) {
- for (j = 0; j < kGenericNumBucketsPerOrder; ++j) {
- bucket->Init(current_size);
-      // Disable pseudo buckets so that touching them faults.
- if (current_size % kGenericSmallestBucket)
- bucket->active_pages_head = nullptr;
- current_size += currentIncrement;
- ++bucket;
- }
- currentIncrement <<= 1;
- }
- DCHECK(current_size == 1 << kGenericMaxBucketedOrder);
- DCHECK(bucket == &this->buckets[0] + kGenericNumBuckets);
-
- // Then set up the fast size -> bucket lookup table.
- bucket = &this->buckets[0];
- internal::PartitionBucket** bucketPtr = &this->bucket_lookups[0];
- for (order = 0; order <= kBitsPerSizeT; ++order) {
- for (j = 0; j < kGenericNumBucketsPerOrder; ++j) {
- if (order < kGenericMinBucketedOrder) {
- // Use the bucket of the finest granularity for malloc(0) etc.
- *bucketPtr++ = &this->buckets[0];
- } else if (order > kGenericMaxBucketedOrder) {
- *bucketPtr++ = internal::PartitionBucket::get_sentinel_bucket();
- } else {
- internal::PartitionBucket* validBucket = bucket;
- // Skip over invalid buckets.
- while (validBucket->slot_size % kGenericSmallestBucket)
- validBucket++;
- *bucketPtr++ = validBucket;
- bucket++;
- }
- }
- }
- DCHECK(bucket == &this->buckets[0] + kGenericNumBuckets);
- DCHECK(bucketPtr == &this->bucket_lookups[0] +
- ((kBitsPerSizeT + 1) * kGenericNumBucketsPerOrder));
- // And there's one last bucket lookup that will be hit for e.g. malloc(-1),
-  // which tries to overflow to a non-existent order.
- *bucketPtr = internal::PartitionBucket::get_sentinel_bucket();
-}
-
-bool PartitionReallocDirectMappedInPlace(PartitionRootGeneric* root,
- internal::PartitionPage* page,
- size_t raw_size) {
- DCHECK(page->bucket->is_direct_mapped());
-
- raw_size = internal::PartitionCookieSizeAdjustAdd(raw_size);
-
- // Note that the new size might be a bucketed size; this function is called
- // whenever we're reallocating a direct mapped allocation.
- size_t new_size = internal::PartitionBucket::get_direct_map_size(raw_size);
- if (new_size < kGenericMinDirectMappedDownsize)
- return false;
-
- // bucket->slot_size is the current size of the allocation.
- size_t current_size = page->bucket->slot_size;
- if (new_size == current_size)
- return true;
-
- char* char_ptr = static_cast<char*>(internal::PartitionPage::ToPointer(page));
-
- if (new_size < current_size) {
- size_t map_size =
- internal::PartitionDirectMapExtent::FromPage(page)->map_size;
-
-    // Don't reallocate in-place if new size is less than 80% of the full
- // map size, to avoid holding on to too much unused address space.
- if ((new_size / kSystemPageSize) * 5 < (map_size / kSystemPageSize) * 4)
- return false;
-
- // Shrink by decommitting unneeded pages and making them inaccessible.
- size_t decommitSize = current_size - new_size;
- root->DecommitSystemPages(char_ptr + new_size, decommitSize);
- CHECK(SetSystemPagesAccess(char_ptr + new_size, decommitSize,
- PageInaccessible));
- } else if (new_size <=
- internal::PartitionDirectMapExtent::FromPage(page)->map_size) {
- // Grow within the actually allocated memory. Just need to make the
- // pages accessible again.
- size_t recommit_size = new_size - current_size;
- CHECK(SetSystemPagesAccess(char_ptr + current_size, recommit_size,
- PageReadWrite));
- root->RecommitSystemPages(char_ptr + current_size, recommit_size);
-
-#if DCHECK_IS_ON()
- memset(char_ptr + current_size, internal::kUninitializedByte,
- recommit_size);
-#endif
- } else {
- // We can't perform the realloc in-place.
- // TODO: support this too when possible.
- return false;
- }
-
-#if DCHECK_IS_ON()
- // Write a new trailing cookie.
- internal::PartitionCookieWriteValue(char_ptr + raw_size -
- internal::kCookieSize);
-#endif
-
- page->set_raw_size(raw_size);
- DCHECK(page->get_raw_size() == raw_size);
-
- page->bucket->slot_size = new_size;
- return true;
-}
-
-void* PartitionReallocGenericFlags(PartitionRootGeneric* root,
- int flags,
- void* ptr,
- size_t new_size,
- const char* type_name) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- void* result = realloc(ptr, new_size);
- CHECK(result || flags & PartitionAllocReturnNull);
- return result;
-#else
- if (UNLIKELY(!ptr))
- return PartitionAllocGenericFlags(root, flags, new_size, type_name);
- if (UNLIKELY(!new_size)) {
- root->Free(ptr);
- return nullptr;
- }
-
- if (new_size > kGenericMaxDirectMapped) {
- if (flags & PartitionAllocReturnNull)
- return nullptr;
- else
- internal::PartitionExcessiveAllocationSize();
- }
-
- internal::PartitionPage* page = internal::PartitionPage::FromPointer(
- internal::PartitionCookieFreePointerAdjust(ptr));
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(root->IsValidPage(page));
-
- if (UNLIKELY(page->bucket->is_direct_mapped())) {
- // We may be able to perform the realloc in place by changing the
- // accessibility of memory pages and, if reducing the size, decommitting
- // them.
- if (PartitionReallocDirectMappedInPlace(root, page, new_size)) {
- PartitionAllocHooks::ReallocHookIfEnabled(ptr, ptr, new_size, type_name);
- return ptr;
- }
- }
-
- size_t actual_new_size = root->ActualSize(new_size);
- size_t actual_old_size = PartitionAllocGetSize(ptr);
-
- // TODO: note that tcmalloc will "ignore" a downsizing realloc() unless the
- // new size is a significant percentage smaller. We could do the same if we
- // determine it is a win.
- if (actual_new_size == actual_old_size) {
- // Trying to allocate a block of size new_size would give us a block of
- // the same size as the one we've already got, so re-use the allocation
- // after updating statistics (and cookies, if present).
- page->set_raw_size(internal::PartitionCookieSizeAdjustAdd(new_size));
-#if DCHECK_IS_ON()
- // Write a new trailing cookie when it is possible to keep track of
- // |new_size| via the raw size pointer.
- if (page->get_raw_size_ptr())
- internal::PartitionCookieWriteValue(static_cast<char*>(ptr) + new_size);
-#endif
- return ptr;
- }
-
- // This realloc cannot be resized in-place. Sadness.
- void* ret = PartitionAllocGenericFlags(root, flags, new_size, type_name);
- if (!ret) {
- if (flags & PartitionAllocReturnNull)
- return nullptr;
- else
- internal::PartitionExcessiveAllocationSize();
- }
-
- size_t copy_size = actual_old_size;
- if (new_size < copy_size)
- copy_size = new_size;
-
- memcpy(ret, ptr, copy_size);
- root->Free(ptr);
- return ret;
-#endif
-}
-
-void* PartitionRootGeneric::Realloc(void* ptr,
- size_t new_size,
- const char* type_name) {
- return PartitionReallocGenericFlags(this, 0, ptr, new_size, type_name);
-}
-
-static size_t PartitionPurgePage(internal::PartitionPage* page, bool discard) {
- const internal::PartitionBucket* bucket = page->bucket;
- size_t slot_size = bucket->slot_size;
- if (slot_size < kSystemPageSize || !page->num_allocated_slots)
- return 0;
-
- size_t bucket_num_slots = bucket->get_slots_per_span();
- size_t discardable_bytes = 0;
-
- size_t raw_size = page->get_raw_size();
- if (raw_size) {
- uint32_t usedBytes = static_cast<uint32_t>(RoundUpToSystemPage(raw_size));
- discardable_bytes = bucket->slot_size - usedBytes;
- if (discardable_bytes && discard) {
- char* ptr =
- reinterpret_cast<char*>(internal::PartitionPage::ToPointer(page));
- ptr += usedBytes;
- DiscardSystemPages(ptr, discardable_bytes);
- }
- return discardable_bytes;
- }
-
- constexpr size_t kMaxSlotCount =
- (kPartitionPageSize * kMaxPartitionPagesPerSlotSpan) / kSystemPageSize;
- DCHECK(bucket_num_slots <= kMaxSlotCount);
- DCHECK(page->num_unprovisioned_slots < bucket_num_slots);
- size_t num_slots = bucket_num_slots - page->num_unprovisioned_slots;
- char slot_usage[kMaxSlotCount];
-#if !defined(OS_WIN)
- // The last freelist entry should not be discarded when using OS_WIN.
- // DiscardVirtualMemory makes the contents of discarded memory undefined.
- size_t last_slot = static_cast<size_t>(-1);
-#endif
- memset(slot_usage, 1, num_slots);
- char* ptr = reinterpret_cast<char*>(internal::PartitionPage::ToPointer(page));
- // First, walk the freelist for this page and make a bitmap of which slots
- // are not in use.
- for (internal::PartitionFreelistEntry* entry = page->freelist_head; entry;
- /**/) {
- size_t slotIndex = (reinterpret_cast<char*>(entry) - ptr) / slot_size;
- DCHECK(slotIndex < num_slots);
- slot_usage[slotIndex] = 0;
- entry = internal::PartitionFreelistEntry::Transform(entry->next);
-#if !defined(OS_WIN)
- // If we have a slot where the masked freelist entry is 0, we can
- // actually discard that freelist entry because touching a discarded
- // page is guaranteed to return original content or 0.
- // (Note that this optimization won't fire on big endian machines
- // because the masking function is negation.)
- if (!internal::PartitionFreelistEntry::Transform(entry))
- last_slot = slotIndex;
-#endif
- }
-
-  // If the slot(s) at the end of the slot span are not in use, we can
- // truncate them entirely and rewrite the freelist.
- size_t truncated_slots = 0;
- while (!slot_usage[num_slots - 1]) {
- truncated_slots++;
- num_slots--;
- DCHECK(num_slots);
- }
- // First, do the work of calculating the discardable bytes. Don't actually
- // discard anything unless the discard flag was passed in.
- if (truncated_slots) {
- size_t unprovisioned_bytes = 0;
- char* begin_ptr = ptr + (num_slots * slot_size);
- char* end_ptr = begin_ptr + (slot_size * truncated_slots);
- begin_ptr = reinterpret_cast<char*>(
- RoundUpToSystemPage(reinterpret_cast<size_t>(begin_ptr)));
- // We round the end pointer here up and not down because we're at the
- // end of a slot span, so we "own" all the way up the page boundary.
- end_ptr = reinterpret_cast<char*>(
- RoundUpToSystemPage(reinterpret_cast<size_t>(end_ptr)));
- DCHECK(end_ptr <= ptr + bucket->get_bytes_per_span());
- if (begin_ptr < end_ptr) {
- unprovisioned_bytes = end_ptr - begin_ptr;
- discardable_bytes += unprovisioned_bytes;
- }
- if (unprovisioned_bytes && discard) {
- DCHECK(truncated_slots > 0);
- size_t num_new_entries = 0;
- page->num_unprovisioned_slots += static_cast<uint16_t>(truncated_slots);
- // Rewrite the freelist.
- internal::PartitionFreelistEntry** entry_ptr = &page->freelist_head;
- for (size_t slotIndex = 0; slotIndex < num_slots; ++slotIndex) {
- if (slot_usage[slotIndex])
- continue;
- auto* entry = reinterpret_cast<internal::PartitionFreelistEntry*>(
- ptr + (slot_size * slotIndex));
- *entry_ptr = internal::PartitionFreelistEntry::Transform(entry);
- entry_ptr = reinterpret_cast<internal::PartitionFreelistEntry**>(entry);
- num_new_entries++;
-#if !defined(OS_WIN)
- last_slot = slotIndex;
-#endif
- }
- // Terminate the freelist chain.
- *entry_ptr = nullptr;
- // The freelist head is stored unmasked.
- page->freelist_head =
- internal::PartitionFreelistEntry::Transform(page->freelist_head);
- DCHECK(num_new_entries == num_slots - page->num_allocated_slots);
- // Discard the memory.
- DiscardSystemPages(begin_ptr, unprovisioned_bytes);
- }
- }
-
- // Next, walk the slots and for any not in use, consider where the system
- // page boundaries occur. We can release any system pages back to the
- // system as long as we don't interfere with a freelist pointer or an
- // adjacent slot.
- for (size_t i = 0; i < num_slots; ++i) {
- if (slot_usage[i])
- continue;
- // The first address we can safely discard is just after the freelist
-    // pointer. There's one quirk: if the freelist pointer is actually null,
-    // we can discard that pointer value too.
- char* begin_ptr = ptr + (i * slot_size);
- char* end_ptr = begin_ptr + slot_size;
-#if !defined(OS_WIN)
- if (i != last_slot)
- begin_ptr += sizeof(internal::PartitionFreelistEntry);
-#else
- begin_ptr += sizeof(internal::PartitionFreelistEntry);
-#endif
- begin_ptr = reinterpret_cast<char*>(
- RoundUpToSystemPage(reinterpret_cast<size_t>(begin_ptr)));
- end_ptr = reinterpret_cast<char*>(
- RoundDownToSystemPage(reinterpret_cast<size_t>(end_ptr)));
- if (begin_ptr < end_ptr) {
- size_t partial_slot_bytes = end_ptr - begin_ptr;
- discardable_bytes += partial_slot_bytes;
- if (discard)
- DiscardSystemPages(begin_ptr, partial_slot_bytes);
- }
- }
- return discardable_bytes;
-}
-
-static void PartitionPurgeBucket(internal::PartitionBucket* bucket) {
- if (bucket->active_pages_head !=
- internal::PartitionPage::get_sentinel_page()) {
- for (internal::PartitionPage* page = bucket->active_pages_head; page;
- page = page->next_page) {
- DCHECK(page != internal::PartitionPage::get_sentinel_page());
- PartitionPurgePage(page, true);
- }
- }
-}
-
-void PartitionRoot::PurgeMemory(int flags) {
- if (flags & PartitionPurgeDecommitEmptyPages)
- DecommitEmptyPages();
- // We don't currently do anything for PartitionPurgeDiscardUnusedSystemPages
- // here because that flag is only useful for allocations >= system page
- // size. We only have allocations that large inside generic partitions
- // at the moment.
-}
-
-void PartitionRootGeneric::PurgeMemory(int flags) {
- subtle::SpinLock::Guard guard(this->lock);
- if (flags & PartitionPurgeDecommitEmptyPages)
- DecommitEmptyPages();
- if (flags & PartitionPurgeDiscardUnusedSystemPages) {
- for (size_t i = 0; i < kGenericNumBuckets; ++i) {
- internal::PartitionBucket* bucket = &this->buckets[i];
- if (bucket->slot_size >= kSystemPageSize)
- PartitionPurgeBucket(bucket);
- }
- }
-}
-
-static void PartitionDumpPageStats(PartitionBucketMemoryStats* stats_out,
- internal::PartitionPage* page) {
- uint16_t bucket_num_slots = page->bucket->get_slots_per_span();
-
- if (page->is_decommitted()) {
- ++stats_out->num_decommitted_pages;
- return;
- }
-
- stats_out->discardable_bytes += PartitionPurgePage(page, false);
-
- size_t raw_size = page->get_raw_size();
- if (raw_size) {
- stats_out->active_bytes += static_cast<uint32_t>(raw_size);
- } else {
- stats_out->active_bytes +=
- (page->num_allocated_slots * stats_out->bucket_slot_size);
- }
-
- size_t page_bytes_resident =
- RoundUpToSystemPage((bucket_num_slots - page->num_unprovisioned_slots) *
- stats_out->bucket_slot_size);
- stats_out->resident_bytes += page_bytes_resident;
- if (page->is_empty()) {
- stats_out->decommittable_bytes += page_bytes_resident;
- ++stats_out->num_empty_pages;
- } else if (page->is_full()) {
- ++stats_out->num_full_pages;
- } else {
- DCHECK(page->is_active());
- ++stats_out->num_active_pages;
- }
-}
-
-static void PartitionDumpBucketStats(PartitionBucketMemoryStats* stats_out,
- const internal::PartitionBucket* bucket) {
- DCHECK(!bucket->is_direct_mapped());
- stats_out->is_valid = false;
- // If the active page list is empty (==
- // internal::PartitionPage::get_sentinel_page()),
- // the bucket might still need to be reported if it has a list of empty,
- // decommitted or full pages.
- if (bucket->active_pages_head ==
- internal::PartitionPage::get_sentinel_page() &&
- !bucket->empty_pages_head && !bucket->decommitted_pages_head &&
- !bucket->num_full_pages)
- return;
-
- memset(stats_out, '\0', sizeof(*stats_out));
- stats_out->is_valid = true;
- stats_out->is_direct_map = false;
- stats_out->num_full_pages = static_cast<size_t>(bucket->num_full_pages);
- stats_out->bucket_slot_size = bucket->slot_size;
- uint16_t bucket_num_slots = bucket->get_slots_per_span();
- size_t bucket_useful_storage = stats_out->bucket_slot_size * bucket_num_slots;
- stats_out->allocated_page_size = bucket->get_bytes_per_span();
- stats_out->active_bytes = bucket->num_full_pages * bucket_useful_storage;
- stats_out->resident_bytes =
- bucket->num_full_pages * stats_out->allocated_page_size;
-
- for (internal::PartitionPage* page = bucket->empty_pages_head; page;
- page = page->next_page) {
- DCHECK(page->is_empty() || page->is_decommitted());
- PartitionDumpPageStats(stats_out, page);
- }
- for (internal::PartitionPage* page = bucket->decommitted_pages_head; page;
- page = page->next_page) {
- DCHECK(page->is_decommitted());
- PartitionDumpPageStats(stats_out, page);
- }
-
- if (bucket->active_pages_head !=
- internal::PartitionPage::get_sentinel_page()) {
- for (internal::PartitionPage* page = bucket->active_pages_head; page;
- page = page->next_page) {
- DCHECK(page != internal::PartitionPage::get_sentinel_page());
- PartitionDumpPageStats(stats_out, page);
- }
- }
-}
-
-void PartitionRootGeneric::DumpStats(const char* partition_name,
- bool is_light_dump,
- PartitionStatsDumper* dumper) {
- PartitionMemoryStats stats = {0};
- stats.total_mmapped_bytes =
- this->total_size_of_super_pages + this->total_size_of_direct_mapped_pages;
- stats.total_committed_bytes = this->total_size_of_committed_pages;
-
- size_t direct_mapped_allocations_total_size = 0;
-
- static const size_t kMaxReportableDirectMaps = 4096;
-
- // Allocate on the heap rather than on the stack to avoid stack overflow
- // skirmishes (on Windows, in particular).
- std::unique_ptr<uint32_t[]> direct_map_lengths = nullptr;
- if (!is_light_dump) {
- direct_map_lengths =
- std::unique_ptr<uint32_t[]>(new uint32_t[kMaxReportableDirectMaps]);
- }
-
- PartitionBucketMemoryStats bucket_stats[kGenericNumBuckets];
- size_t num_direct_mapped_allocations = 0;
- {
- subtle::SpinLock::Guard guard(this->lock);
-
- for (size_t i = 0; i < kGenericNumBuckets; ++i) {
- const internal::PartitionBucket* bucket = &this->buckets[i];
- // Don't report the pseudo buckets that the generic allocator sets up in
- // order to preserve a fast size->bucket map (see
- // PartitionRootGeneric::Init() for details).
- if (!bucket->active_pages_head)
- bucket_stats[i].is_valid = false;
- else
- PartitionDumpBucketStats(&bucket_stats[i], bucket);
- if (bucket_stats[i].is_valid) {
- stats.total_resident_bytes += bucket_stats[i].resident_bytes;
- stats.total_active_bytes += bucket_stats[i].active_bytes;
- stats.total_decommittable_bytes += bucket_stats[i].decommittable_bytes;
- stats.total_discardable_bytes += bucket_stats[i].discardable_bytes;
- }
- }
-
- for (internal::PartitionDirectMapExtent *extent = this->direct_map_list;
- extent && num_direct_mapped_allocations < kMaxReportableDirectMaps;
- extent = extent->next_extent, ++num_direct_mapped_allocations) {
- DCHECK(!extent->next_extent ||
- extent->next_extent->prev_extent == extent);
- size_t slot_size = extent->bucket->slot_size;
- direct_mapped_allocations_total_size += slot_size;
- if (is_light_dump)
- continue;
- direct_map_lengths[num_direct_mapped_allocations] = slot_size;
- }
- }
-
- if (!is_light_dump) {
- // Call |PartitionsDumpBucketStats| after collecting stats, because it may
- // try to allocate using |PartitionRootGeneric::Alloc()|, which cannot be
- // done while the partition lock is held.
- for (size_t i = 0; i < kGenericNumBuckets; ++i) {
- if (bucket_stats[i].is_valid)
- dumper->PartitionsDumpBucketStats(partition_name, &bucket_stats[i]);
- }
-
- for (size_t i = 0; i < num_direct_mapped_allocations; ++i) {
- uint32_t size = direct_map_lengths[i];
-
- PartitionBucketMemoryStats mapped_stats = {};
- mapped_stats.is_valid = true;
- mapped_stats.is_direct_map = true;
- mapped_stats.num_full_pages = 1;
- mapped_stats.allocated_page_size = size;
- mapped_stats.bucket_slot_size = size;
- mapped_stats.active_bytes = size;
- mapped_stats.resident_bytes = size;
- dumper->PartitionsDumpBucketStats(partition_name, &mapped_stats);
- }
- }
-
- stats.total_resident_bytes += direct_mapped_allocations_total_size;
- stats.total_active_bytes += direct_mapped_allocations_total_size;
- dumper->PartitionDumpTotals(partition_name, &stats);
-}
-
-void PartitionRoot::DumpStats(const char* partition_name,
- bool is_light_dump,
- PartitionStatsDumper* dumper) {
- PartitionMemoryStats stats = {0};
- stats.total_mmapped_bytes = this->total_size_of_super_pages;
- stats.total_committed_bytes = this->total_size_of_committed_pages;
- DCHECK(!this->total_size_of_direct_mapped_pages);
-
- static const size_t kMaxReportableBuckets = 4096 / sizeof(void*);
- std::unique_ptr<PartitionBucketMemoryStats[]> memory_stats;
- if (!is_light_dump)
- memory_stats = std::unique_ptr<PartitionBucketMemoryStats[]>(
- new PartitionBucketMemoryStats[kMaxReportableBuckets]);
-
- const size_t partition_num_buckets = this->num_buckets;
- DCHECK(partition_num_buckets <= kMaxReportableBuckets);
-
- for (size_t i = 0; i < partition_num_buckets; ++i) {
- PartitionBucketMemoryStats bucket_stats = {0};
- PartitionDumpBucketStats(&bucket_stats, &this->buckets()[i]);
- if (bucket_stats.is_valid) {
- stats.total_resident_bytes += bucket_stats.resident_bytes;
- stats.total_active_bytes += bucket_stats.active_bytes;
- stats.total_decommittable_bytes += bucket_stats.decommittable_bytes;
- stats.total_discardable_bytes += bucket_stats.discardable_bytes;
- }
- if (!is_light_dump) {
- if (bucket_stats.is_valid)
- memory_stats[i] = bucket_stats;
- else
- memory_stats[i].is_valid = false;
- }
- }
- if (!is_light_dump) {
- // PartitionsDumpBucketStats is called after collecting stats because it
- // can use PartitionRoot::Alloc() to allocate and this can affect the
- // statistics.
- for (size_t i = 0; i < partition_num_buckets; ++i) {
- if (memory_stats[i].is_valid)
- dumper->PartitionsDumpBucketStats(partition_name, &memory_stats[i]);
- }
- }
- dumper->PartitionDumpTotals(partition_name, &stats);
-}
-
-} // namespace base
diff --git a/base/allocator/partition_allocator/partition_alloc.h b/base/allocator/partition_allocator/partition_alloc.h
deleted file mode 100644
index c69fd01..0000000
--- a/base/allocator/partition_allocator/partition_alloc.h
+++ /dev/null
@@ -1,438 +0,0 @@
-// Copyright (c) 2013 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_H_
-
-// DESCRIPTION
-// PartitionRoot::Alloc() / PartitionRootGeneric::Alloc() and PartitionFree() /
-// PartitionRootGeneric::Free() are approximately analogous to malloc() and
-// free().
-//
-// The main difference is that a PartitionRoot / PartitionRootGeneric object
-// must be supplied to these functions, representing a specific "heap partition"
-// that will be used to satisfy the allocation. Different partitions are
-// guaranteed to exist in separate address spaces, including being separate from
-// the main system heap. If the contained objects are all freed, physical memory
-// is returned to the system but the address space remains reserved.
-// See PartitionAlloc.md for other security properties PartitionAlloc provides.
-//
-// THE ONLY LEGITIMATE WAY TO OBTAIN A PartitionRoot IS THROUGH THE
-// SizeSpecificPartitionAllocator / PartitionAllocatorGeneric classes. To
-// minimize the instruction count to the fullest extent possible, the
-// PartitionRoot is really just a header adjacent to other data areas provided
-// by the allocator class.
-//
-// The PartitionRoot::Alloc() variant of the API has the following caveats:
-// - Allocations and frees against a single partition must be single threaded.
-// - Allocations must not exceed a max size, chosen at compile-time via a
-// templated parameter to PartitionAllocator.
-// - Allocation sizes must be aligned to the system pointer size.
-// - Allocations are bucketed exactly according to size.
-//
-// And for PartitionRootGeneric::Alloc():
-// - Multi-threaded use against a single partition is ok; locking is handled.
-// - Allocations of arbitrary size can be handled (subject to a limit of
-// INT_MAX bytes for security reasons).
-// - Bucketing is by approximate size, for example an allocation of 4000 bytes
-// might be placed into a 4096-byte bucket. Bucket sizes are chosen to try and
-// keep worst-case waste to ~10%.
-//
-// The allocators are designed to be extremely fast, thanks to the following
-// properties and design:
-// - Just two (reasonably predictable) branches in the hot / fast path
-// for both allocating and (significantly) freeing.
-// - A minimal number of operations in the hot / fast path, with the slow paths
-// in separate functions, leading to the possibility of inlining.
-// - Each partition page (which is usually multiple physical pages) has a
-// metadata structure which allows fast mapping of free() address to an
-// underlying bucket.
-// - Supports a lock-free API for fast performance in single-threaded cases.
-// - The freelist for a given bucket is split across a number of partition
-// pages, enabling various simple tricks to try and minimize fragmentation.
-// - Fine-grained bucket sizes leading to less waste and better packing.
-//
-// The following security properties could be investigated in the future:
-// - Per-object bucketing (instead of per-size) is mostly available at the API,
-// but not used yet.
-// - No randomness of freelist entries or bucket position.
-// - Better checking for wild pointers in free().
-// - Better freelist masking function to guarantee fault on 32-bit.
-
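
As a rough usage sketch of the two entry points described above (illustrative only; the OOM handler is a hypothetical name):

    #include "base/allocator/partition_allocator/partition_alloc.h"

    void MyOomHandler() {}  // Hypothetical out-of-memory callback.

    void Example() {
      base::PartitionAllocGlobalInit(&MyOomHandler);

      // Generic partition: thread-safe, handles arbitrary sizes.
      base::PartitionAllocatorGeneric allocator;
      allocator.init();
      void* p = allocator.root()->Alloc(4000, "MyType");
      allocator.root()->Free(p);

      // Size-specific partition: single-threaded, sizes bounded at
      // compile time, exact-size buckets.
      base::SizeSpecificPartitionAllocator<1024> small_allocator;
      small_allocator.init();
      void* q = small_allocator.root()->Alloc(64, "SmallThing");
      base::PartitionFree(q);
    }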
-#include <limits.h>
-#include <string.h>
-
-#include "base/allocator/partition_allocator/page_allocator.h"
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/allocator/partition_allocator/partition_bucket.h"
-#include "base/allocator/partition_allocator/partition_cookie.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-#include "base/allocator/partition_allocator/partition_root_base.h"
-#include "base/allocator/partition_allocator/spin_lock.h"
-#include "base/base_export.h"
-#include "base/bits.h"
-#include "base/compiler_specific.h"
-#include "base/logging.h"
-#include "base/macros.h"
-#include "base/sys_byteorder.h"
-#include "build_config.h"
-
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-#include <stdlib.h>
-#endif
-
-namespace base {
-
-class PartitionStatsDumper;
-
-enum PartitionPurgeFlags {
- // Decommitting the ring list of empty pages is reasonably fast.
- PartitionPurgeDecommitEmptyPages = 1 << 0,
- // Discarding unused system pages is slower, because it involves walking all
- // freelists in all active partition pages of all buckets >= system page
- // size. It often frees a similar amount of memory to decommitting the empty
- // pages, though.
- PartitionPurgeDiscardUnusedSystemPages = 1 << 1,
-};
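
For example, a memory-pressure handler might combine both flags (a sketch; the handler name is hypothetical):

    // Release as much reusable memory as possible from one partition.
    void OnMemoryPressure(base::PartitionRootGeneric* root) {
      root->PurgeMemory(base::PartitionPurgeDecommitEmptyPages |
                        base::PartitionPurgeDiscardUnusedSystemPages);
    }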
-
-// Never instantiate a PartitionRoot directly, instead use PartitionAlloc.
-struct BASE_EXPORT PartitionRoot : public internal::PartitionRootBase {
- PartitionRoot();
- ~PartitionRoot() override;
- // This references the bucket array just past the end of this struct. All
- // uses of PartitionRoot must have the bucket array come right after.
- //
- // The PartitionAlloc templated class ensures the following is correct.
- ALWAYS_INLINE internal::PartitionBucket* buckets() {
- return reinterpret_cast<internal::PartitionBucket*>(this + 1);
- }
- ALWAYS_INLINE const internal::PartitionBucket* buckets() const {
- return reinterpret_cast<const internal::PartitionBucket*>(this + 1);
- }
-
- void Init(size_t num_buckets, size_t max_allocation);
-
- ALWAYS_INLINE void* Alloc(size_t size, const char* type_name);
-
- void PurgeMemory(int flags);
-
- void DumpStats(const char* partition_name,
- bool is_light_dump,
- PartitionStatsDumper* dumper);
-};
-
-// Never instantiate a PartitionRootGeneric directly, instead use
-// PartitionAllocatorGeneric.
-struct BASE_EXPORT PartitionRootGeneric : public internal::PartitionRootBase {
- PartitionRootGeneric();
- ~PartitionRootGeneric() override;
- subtle::SpinLock lock;
- // Some pre-computed constants.
- size_t order_index_shifts[kBitsPerSizeT + 1] = {};
- size_t order_sub_index_masks[kBitsPerSizeT + 1] = {};
- // The bucket lookup table lets us map a size_t to a bucket quickly.
- // The trailing +1 caters for the overflow case for very large allocation
- // sizes. It is one flat array instead of a 2D array because in the 2D
- // world, we'd need to index array[blah][max+1] which risks undefined
- // behavior.
- internal::PartitionBucket*
- bucket_lookups[((kBitsPerSizeT + 1) * kGenericNumBucketsPerOrder) + 1] =
- {};
- internal::PartitionBucket buckets[kGenericNumBuckets] = {};
-
- // Public API.
- void Init();
-
- ALWAYS_INLINE void* Alloc(size_t size, const char* type_name);
- ALWAYS_INLINE void Free(void* ptr);
-
- NOINLINE void* Realloc(void* ptr, size_t new_size, const char* type_name);
-
- ALWAYS_INLINE size_t ActualSize(size_t size);
-
- void PurgeMemory(int flags);
-
- void DumpStats(const char* partition_name,
- bool is_light_dump,
- PartitionStatsDumper* partition_stats_dumper);
-};
-
-// Struct used to retrieve total memory usage of a partition. Used by
-// PartitionStatsDumper implementation.
-struct PartitionMemoryStats {
- size_t total_mmapped_bytes;    // Total bytes mmapped from the system.
- size_t total_committed_bytes;  // Total size of committed pages.
- size_t total_resident_bytes; // Total bytes provisioned by the partition.
- size_t total_active_bytes; // Total active bytes in the partition.
- size_t total_decommittable_bytes; // Total bytes that could be decommitted.
- size_t total_discardable_bytes; // Total bytes that could be discarded.
-};
-
-// Struct used to retrieve memory statistics about a partition bucket. Used by
-// PartitionStatsDumper implementation.
-struct PartitionBucketMemoryStats {
- bool is_valid;    // Whether these stats are valid.
- bool is_direct_map; // True if this is a direct mapping; size will not be
- // unique.
- uint32_t bucket_slot_size; // The size of the slot in bytes.
- uint32_t allocated_page_size; // Total size the partition page allocated from
- // the system.
- uint32_t active_bytes; // Total active bytes used in the bucket.
- uint32_t resident_bytes; // Total bytes provisioned in the bucket.
- uint32_t decommittable_bytes; // Total bytes that could be decommitted.
- uint32_t discardable_bytes; // Total bytes that could be discarded.
- uint32_t num_full_pages; // Number of pages with all slots allocated.
- uint32_t num_active_pages; // Number of pages that have at least one
- // provisioned slot.
- uint32_t num_empty_pages; // Number of pages that are empty
- // but not decommitted.
- uint32_t num_decommitted_pages; // Number of pages that are empty
- // and decommitted.
-};
-
-// Interface that is passed to PartitionDumpStats and
-// PartitionDumpStatsGeneric for using the memory statistics.
-class BASE_EXPORT PartitionStatsDumper {
- public:
- // Called to dump total memory used by partition, once per partition.
- virtual void PartitionDumpTotals(const char* partition_name,
- const PartitionMemoryStats*) = 0;
-
- // Called to dump stats about buckets, for each bucket.
- virtual void PartitionsDumpBucketStats(const char* partition_name,
- const PartitionBucketMemoryStats*) = 0;
-};
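
A minimal implementation could simply print what it receives (a sketch; where the output goes is an assumption):

    #include <cstdio>

    class PrintingStatsDumper : public base::PartitionStatsDumper {
     public:
      void PartitionDumpTotals(const char* partition_name,
                               const base::PartitionMemoryStats* stats) override {
        printf("%s: committed=%zu resident=%zu active=%zu\n", partition_name,
               stats->total_committed_bytes, stats->total_resident_bytes,
               stats->total_active_bytes);
      }
      void PartitionsDumpBucketStats(
          const char* partition_name,
          const base::PartitionBucketMemoryStats* stats) override {
        printf("%s: bucket slot_size=%u active_bytes=%u\n", partition_name,
               stats->bucket_slot_size, stats->active_bytes);
      }
    };

    // Usage sketch:
    //   PrintingStatsDumper dumper;
    //   root->DumpStats("my_partition", /*is_light_dump=*/false, &dumper);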
-
-BASE_EXPORT void PartitionAllocGlobalInit(void (*oom_handling_function)());
-
-class BASE_EXPORT PartitionAllocHooks {
- public:
- typedef void AllocationHook(void* address, size_t, const char* type_name);
- typedef void FreeHook(void* address);
-
- // To unhook, call Set*Hook with nullptr.
- static void SetAllocationHook(AllocationHook* hook) {
- // Chained allocation hooks are not supported. Registering a non-null
- // hook when a non-null hook is already registered indicates somebody is
- // trying to overwrite a hook.
- CHECK(!hook || !allocation_hook_) << "Overwriting allocation hook";
- allocation_hook_ = hook;
- }
- static void SetFreeHook(FreeHook* hook) {
- CHECK(!hook || !free_hook_) << "Overwriting free hook";
- free_hook_ = hook;
- }
-
- static void AllocationHookIfEnabled(void* address,
- size_t size,
- const char* type_name) {
- AllocationHook* hook = allocation_hook_;
- if (UNLIKELY(hook != nullptr))
- hook(address, size, type_name);
- }
-
- static void FreeHookIfEnabled(void* address) {
- FreeHook* hook = free_hook_;
- if (UNLIKELY(hook != nullptr))
- hook(address);
- }
-
- static void ReallocHookIfEnabled(void* old_address,
- void* new_address,
- size_t size,
- const char* type_name) {
- // Report a reallocation as a free followed by an allocation.
- AllocationHook* allocation_hook = allocation_hook_;
- FreeHook* free_hook = free_hook_;
- if (UNLIKELY(allocation_hook && free_hook)) {
- free_hook(old_address);
- allocation_hook(new_address, size, type_name);
- }
- }
-
- private:
- // Pointers to hook functions that PartitionAlloc will call on allocation and
- // free if the pointers are non-null.
- static AllocationHook* allocation_hook_;
- static FreeHook* free_hook_;
-};
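
For example, a profiler might install and later remove hooks like this (a sketch; the Record* functions are hypothetical):

    #include <cstddef>

    void RecordAlloc(void*, size_t, const char*);  // Hypothetical profiler.
    void RecordFree(void*);                        // Hypothetical profiler.

    void MyAllocationHook(void* address, size_t size, const char* type_name) {
      RecordAlloc(address, size, type_name);
    }
    void MyFreeHook(void* address) {
      RecordFree(address);
    }

    void InstallHooks() {
      base::PartitionAllocHooks::SetAllocationHook(&MyAllocationHook);
      base::PartitionAllocHooks::SetFreeHook(&MyFreeHook);
    }
    void RemoveHooks() {
      base::PartitionAllocHooks::SetAllocationHook(nullptr);
      base::PartitionAllocHooks::SetFreeHook(nullptr);
    }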
-
-ALWAYS_INLINE void* PartitionRoot::Alloc(size_t size, const char* type_name) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- void* result = malloc(size);
- CHECK(result);
- return result;
-#else
- size_t requested_size = size;
- size = internal::PartitionCookieSizeAdjustAdd(size);
- DCHECK(this->initialized);
- size_t index = size >> kBucketShift;
- DCHECK(index < this->num_buckets);
- DCHECK(size == index << kBucketShift);
- internal::PartitionBucket* bucket = &this->buckets()[index];
- void* result = AllocFromBucket(bucket, 0, size);
- PartitionAllocHooks::AllocationHookIfEnabled(result, requested_size,
- type_name);
- return result;
-#endif // defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
-}
-
-ALWAYS_INLINE void PartitionFree(void* ptr) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- free(ptr);
-#else
- // TODO(palmer): Check ptr alignment before continuing. Shall we do the check
- // inside PartitionCookieFreePointerAdjust?
- PartitionAllocHooks::FreeHookIfEnabled(ptr);
- ptr = internal::PartitionCookieFreePointerAdjust(ptr);
- internal::PartitionPage* page = internal::PartitionPage::FromPointer(ptr);
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(internal::PartitionRootBase::IsValidPage(page));
- page->Free(ptr);
-#endif
-}
-
-ALWAYS_INLINE internal::PartitionBucket* PartitionGenericSizeToBucket(
- PartitionRootGeneric* root,
- size_t size) {
- size_t order = kBitsPerSizeT - bits::CountLeadingZeroBitsSizeT(size);
- // The order index is simply the next few bits after the most significant bit.
- size_t order_index = (size >> root->order_index_shifts[order]) &
- (kGenericNumBucketsPerOrder - 1);
- // And if the remaining bits are non-zero we must bump the bucket up.
- size_t sub_order_index = size & root->order_sub_index_masks[order];
- internal::PartitionBucket* bucket =
- root->bucket_lookups[(order << kGenericNumBucketsPerOrderBits) +
- order_index + !!sub_order_index];
- DCHECK(!bucket->slot_size || bucket->slot_size >= size);
- DCHECK(!(bucket->slot_size % kGenericSmallestBucket));
- return bucket;
-}
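
Tracing a 4000-byte request by hand shows how it lands in the 4096-byte bucket promised in the header comment. A standalone sketch (assumes 64-bit size_t, uses the GCC/Clang __builtin_clzll in place of bits::CountLeadingZeroBitsSizeT, and computes the shift and mask inline rather than reading the tables):

    #include <cstddef>
    #include <cstdio>

    int main() {
      size_t size = 4000;                         // 0b1111'1010'0000
      size_t order = 64 - __builtin_clzll(size);  // 12: sizes 2048..4095.
      size_t shift = order - 1 - 3;               // order_index_shifts[12] == 8.
      size_t order_index = (size >> shift) & 7;   // (4000 >> 8) & 7 == 7.
      size_t sub_order = size & ((1ull << shift) - 1);  // 4000 & 0xFF == 160.
      // A non-zero remainder bumps the index: 7 + 1 == 8, which runs over
      // into the next order's first lookup slot, whose bucket has
      // slot_size 4096.
      size_t lookup = (order << 3) + order_index + !!sub_order;
      printf("order=%zu index=%zu lookup=%zu\n", order, order_index, lookup);
      return 0;
    }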
-
-ALWAYS_INLINE void* PartitionAllocGenericFlags(PartitionRootGeneric* root,
- int flags,
- size_t size,
- const char* type_name) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- void* result = malloc(size);
- CHECK(result || flags & PartitionAllocReturnNull);
- return result;
-#else
- DCHECK(root->initialized);
- size_t requested_size = size;
- size = internal::PartitionCookieSizeAdjustAdd(size);
- internal::PartitionBucket* bucket = PartitionGenericSizeToBucket(root, size);
- void* ret = nullptr;
- {
- subtle::SpinLock::Guard guard(root->lock);
- ret = root->AllocFromBucket(bucket, flags, size);
- }
- PartitionAllocHooks::AllocationHookIfEnabled(ret, requested_size, type_name);
- return ret;
-#endif
-}
-
-ALWAYS_INLINE void* PartitionRootGeneric::Alloc(size_t size,
- const char* type_name) {
- return PartitionAllocGenericFlags(this, 0, size, type_name);
-}
-
-ALWAYS_INLINE void PartitionRootGeneric::Free(void* ptr) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- free(ptr);
-#else
- DCHECK(this->initialized);
-
- if (UNLIKELY(!ptr))
- return;
-
- PartitionAllocHooks::FreeHookIfEnabled(ptr);
- ptr = internal::PartitionCookieFreePointerAdjust(ptr);
- internal::PartitionPage* page = internal::PartitionPage::FromPointer(ptr);
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(IsValidPage(page));
- {
- subtle::SpinLock::Guard guard(this->lock);
- page->Free(ptr);
- }
-#endif
-}
-
-BASE_EXPORT void* PartitionReallocGenericFlags(PartitionRootGeneric* root,
- int flags,
- void* ptr,
- size_t new_size,
- const char* type_name);
-
-ALWAYS_INLINE size_t PartitionRootGeneric::ActualSize(size_t size) {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- return size;
-#else
- DCHECK(this->initialized);
- size = internal::PartitionCookieSizeAdjustAdd(size);
- internal::PartitionBucket* bucket = PartitionGenericSizeToBucket(this, size);
- if (LIKELY(!bucket->is_direct_mapped())) {
- size = bucket->slot_size;
- } else if (size > kGenericMaxDirectMapped) {
- // Too large to allocate => return the size unchanged.
- } else {
- size = internal::PartitionBucket::get_direct_map_size(size);
- }
- return internal::PartitionCookieSizeAdjustSubtract(size);
-#endif
-}
-
-ALWAYS_INLINE bool PartitionAllocSupportsGetSize() {
-#if defined(MEMORY_TOOL_REPLACES_ALLOCATOR)
- return false;
-#else
- return true;
-#endif
-}
-
-ALWAYS_INLINE size_t PartitionAllocGetSize(void* ptr) {
- // No need to lock here. Only |ptr| being freed by another thread could
- // cause trouble, and the caller is responsible for that not happening.
- DCHECK(PartitionAllocSupportsGetSize());
- ptr = internal::PartitionCookieFreePointerAdjust(ptr);
- internal::PartitionPage* page = internal::PartitionPage::FromPointer(ptr);
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(internal::PartitionRootBase::IsValidPage(page));
- size_t size = page->bucket->slot_size;
- return internal::PartitionCookieSizeAdjustSubtract(size);
-}
-
-template <size_t N>
-class SizeSpecificPartitionAllocator {
- public:
- SizeSpecificPartitionAllocator() {
- memset(actual_buckets_, 0,
- sizeof(internal::PartitionBucket) * arraysize(actual_buckets_));
- }
- ~SizeSpecificPartitionAllocator() = default;
- static const size_t kMaxAllocation = N - kAllocationGranularity;
- static const size_t kNumBuckets = N / kAllocationGranularity;
- void init() { partition_root_.Init(kNumBuckets, kMaxAllocation); }
- ALWAYS_INLINE PartitionRoot* root() { return &partition_root_; }
-
- private:
- PartitionRoot partition_root_;
- internal::PartitionBucket actual_buckets_[kNumBuckets];
-};
-
-class BASE_EXPORT PartitionAllocatorGeneric {
- public:
- PartitionAllocatorGeneric();
- ~PartitionAllocatorGeneric();
-
- void init() { partition_root_.Init(); }
- ALWAYS_INLINE PartitionRootGeneric* root() { return &partition_root_; }
-
- private:
- PartitionRootGeneric partition_root_;
-};
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_H_
diff --git a/base/allocator/partition_allocator/partition_alloc_constants.h b/base/allocator/partition_allocator/partition_alloc_constants.h
deleted file mode 100644
index deaa19e..0000000
--- a/base/allocator/partition_allocator/partition_alloc_constants.h
+++ /dev/null
@@ -1,161 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_CONSTANTS_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_CONSTANTS_H_
-
-#include <limits.h>
-
-#include "base/allocator/partition_allocator/page_allocator_constants.h"
-#include "base/bits.h"
-#include "base/logging.h"
-
-namespace base {
-
-// Allocation granularity of sizeof(void*) bytes.
-static const size_t kAllocationGranularity = sizeof(void*);
-static const size_t kAllocationGranularityMask = kAllocationGranularity - 1;
-static const size_t kBucketShift = (kAllocationGranularity == 8) ? 3 : 2;
-
-// Underlying partition storage pages are a power-of-two size. It is typical
-// for a partition page to be based on multiple system pages. Most references to
-// "page" refer to partition pages.
-// We also have the concept of "super pages" -- these are the underlying system
-// allocations we make. Super pages contain multiple partition pages inside them
-// and include space for a small amount of metadata per partition page.
-// Inside super pages, we store "slot spans". A slot span is a contiguous range
-// of one or more partition pages that stores allocations of the same size.
-// Slot span sizes are adjusted depending on the allocation size, to make sure
-// the packing does not lead to unused (wasted) space at the end of the last
-// system page of the span. For our current max slot span size of 64k and other
-// constant values, we pack _all_ PartitionRootGeneric::Alloc() sizes perfectly
-// up against the end of a system page.
-#if defined(_MIPS_ARCH_LOONGSON)
-static const size_t kPartitionPageShift = 16; // 64KB
-#else
-static const size_t kPartitionPageShift = 14; // 16KB
-#endif
-static const size_t kPartitionPageSize = 1 << kPartitionPageShift;
-static const size_t kPartitionPageOffsetMask = kPartitionPageSize - 1;
-static const size_t kPartitionPageBaseMask = ~kPartitionPageOffsetMask;
-static const size_t kMaxPartitionPagesPerSlotSpan = 4;
-
-// To avoid fragmentation via never-used freelist entries, we hand out partition
-// freelist sections gradually, in units of the dominant system page size.
-// What we're actually doing is avoiding filling the full partition page (16 KB)
-// with freelist pointers right away. Writing freelist pointers will fault and
-// dirty a private page, which is very wasteful if we never actually store
-// objects there.
-static const size_t kNumSystemPagesPerPartitionPage =
- kPartitionPageSize / kSystemPageSize;
-static const size_t kMaxSystemPagesPerSlotSpan =
- kNumSystemPagesPerPartitionPage * kMaxPartitionPagesPerSlotSpan;
-
-// We reserve virtual address space in 2MB chunks (aligned to 2MB as well).
-// These chunks are called "super pages". We do this so that we can store
-// metadata in the first few pages of each 2MB aligned section. This leads to
-// a very fast free(). We specifically choose 2MB because this virtual address
-// block represents a full but single PTE allocation on ARM, ia32 and x64.
-//
-// The layout of the super page is as follows. The sizes below are the same
-// for 32 bit and 64 bit.
-//
-// | Guard page (4KB) |
-// | Metadata page (4KB) |
-// | Guard pages (8KB) |
-// | Slot span |
-// | Slot span |
-// | ... |
-// | Slot span |
-// | Guard page (4KB) |
-//
-// - Each slot span is a contiguous range of one or more PartitionPages.
-// - The metadata page has the following format. Note that the PartitionPage
-// that is not at the head of a slot span is "unused". In other words,
-// the metadata for the slot span is stored only in the first PartitionPage
-// of the slot span. Metadata accesses to other PartitionPages are
-// redirected to the first PartitionPage.
-//
-// | SuperPageExtentEntry (32B) |
-// | PartitionPage of slot span 1 (32B, used) |
-// | PartitionPage of slot span 1 (32B, unused) |
-// | PartitionPage of slot span 1 (32B, unused) |
-// | PartitionPage of slot span 2 (32B, used) |
-// | PartitionPage of slot span 3 (32B, used) |
-// | ... |
-// | PartitionPage of slot span N (32B, unused) |
-//
-// A direct mapped page has a similar layout to fake it looking like a super
-// page:
-//
-// | Guard page (4KB) |
-// | Metadata page (4KB) |
-// | Guard pages (8KB) |
-// | Direct mapped object |
-// | Guard page (4KB) |
-//
-// - The metadata page has the following layout:
-//
-// | SuperPageExtentEntry (32B) |
-// | PartitionPage (32B) |
-// | PartitionBucket (32B) |
-// | PartitionDirectMapExtent (8B) |
-static const size_t kSuperPageShift = 21; // 2MB
-static const size_t kSuperPageSize = 1 << kSuperPageShift;
-static const size_t kSuperPageOffsetMask = kSuperPageSize - 1;
-static const size_t kSuperPageBaseMask = ~kSuperPageOffsetMask;
-static const size_t kNumPartitionPagesPerSuperPage =
- kSuperPageSize / kPartitionPageSize;
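
The 2 MB alignment is what makes the pointer-to-metadata mapping cheap. A sketch of that arithmetic using the constants above (assumes <cstdint> for uintptr_t):

    // From any pointer inside a super page to its metadata page: mask off
    // the low 21 bits, then skip the leading guard page.
    char* SuperPageMetadataArea(void* ptr) {
      uintptr_t addr = reinterpret_cast<uintptr_t>(ptr);
      uintptr_t super_page = addr & kSuperPageBaseMask;  // 2 MB-aligned base.
      return reinterpret_cast<char*>(super_page + kSystemPageSize);
    }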
-
-// The following kGeneric* constants apply to the generic variants of the API.
-// The "order" of an allocation is closely related to the power-of-two size of
-// the allocation. More precisely, the order is the bit index of the
-// most-significant-bit in the allocation size, where bit numbering starts
-// at index 1 for the least-significant-bit.
-// In terms of allocation sizes, order 0 covers 0, order 1 covers 1, order 2
-// covers 2->3, order 3 covers 4->7, order 4 covers 8->15.
-static const size_t kGenericMinBucketedOrder = 4; // 8 bytes.
-static const size_t kGenericMaxBucketedOrder =
- 20; // Largest bucketed order is 1<<(20-1) (storing 512KB -> almost 1MB)
-static const size_t kGenericNumBucketedOrders =
- (kGenericMaxBucketedOrder - kGenericMinBucketedOrder) + 1;
-// Eight buckets per order (for the higher orders), e.g. order 8 is 128, 144,
-// 160, ..., 240:
-static const size_t kGenericNumBucketsPerOrderBits = 3;
-static const size_t kGenericNumBucketsPerOrder =
- 1 << kGenericNumBucketsPerOrderBits;
-static const size_t kGenericNumBuckets =
- kGenericNumBucketedOrders * kGenericNumBucketsPerOrder;
-static const size_t kGenericSmallestBucket = 1
- << (kGenericMinBucketedOrder - 1);
-static const size_t kGenericMaxBucketSpacing =
- 1 << ((kGenericMaxBucketedOrder - 1) - kGenericNumBucketsPerOrderBits);
-static const size_t kGenericMaxBucketed =
- (1 << (kGenericMaxBucketedOrder - 1)) +
- ((kGenericNumBucketsPerOrder - 1) * kGenericMaxBucketSpacing);
-static const size_t kGenericMinDirectMappedDownsize =
- kGenericMaxBucketed +
- 1; // Limit when downsizing a direct mapping using realloc().
-static const size_t kGenericMaxDirectMapped =
- (1UL << 31) + kPageAllocationGranularity; // 2 GB plus one more page.
-static const size_t kBitsPerSizeT = sizeof(void*) * CHAR_BIT;
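
Plugging the constants in (on 64-bit) gives a quick sanity check of the derived values (a sketch; these assertions restate the arithmetic above rather than add new facts):

    static_assert(kGenericNumBucketedOrders == 17, "orders 4..20 inclusive");
    static_assert(kGenericNumBuckets == 136, "17 orders * 8 buckets per order");
    static_assert(kGenericSmallestBucket == 8, "1 << (4 - 1) bytes");
    static_assert(kGenericMaxBucketSpacing == 65536, "1 << (19 - 3) == 64 KB");
    static_assert(kGenericMaxBucketed == 983040,
                  "(1 << 19) + 7 * 65536, i.e. ~960 KB; larger requests are "
                  "direct mapped, up to 2 GB plus a page");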
-
-// Constant for the memory reclaim logic.
-static const size_t kMaxFreeableSpans = 16;
-
-// If the total size in bytes of allocated but not committed pages exceeds this
-// value (which most likely indicates an "out of virtual address space" state),
-// a special crash stack trace is generated at |PartitionOutOfMemory|.
-// This is to distinguish "out of virtual address space" from
-// "out of physical memory" in crash reports.
-static const size_t kReasonableSizeOfUnusedPages = 1024 * 1024 * 1024; // 1GB
-
-// Flags for PartitionAllocGenericFlags.
-enum PartitionAllocFlags {
- PartitionAllocReturnNull = 1 << 0,
-};
-
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ALLOC_CONSTANTS_H_
diff --git a/base/allocator/partition_allocator/partition_bucket.cc b/base/allocator/partition_allocator/partition_bucket.cc
deleted file mode 100644
index f38b2ea..0000000
--- a/base/allocator/partition_allocator/partition_bucket.cc
+++ /dev/null
@@ -1,554 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/partition_bucket.h"
-#include "base/allocator/partition_allocator/oom.h"
-#include "base/allocator/partition_allocator/page_allocator.h"
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/allocator/partition_allocator/partition_direct_map_extent.h"
-#include "base/allocator/partition_allocator/partition_oom.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-#include "base/allocator/partition_allocator/partition_root_base.h"
-#include "build_config.h"
-
-namespace base {
-namespace internal {
-
-namespace {
-
-ALWAYS_INLINE PartitionPage* PartitionDirectMap(PartitionRootBase* root,
- int flags,
- size_t raw_size) {
- size_t size = PartitionBucket::get_direct_map_size(raw_size);
-
- // Because we need to fake looking like a super page, we need to allocate
- // a bunch of system pages more than "size":
- // - The first few system pages are the partition page in which the super
- // page metadata is stored. We fault just one system page out of a partition
- // page sized clump.
- // - We add a trailing guard page on 32-bit (on 64-bit we rely on the
- // massive address space plus randomization instead).
- size_t map_size = size + kPartitionPageSize;
-#if !defined(ARCH_CPU_64_BITS)
- map_size += kSystemPageSize;
-#endif
- // Round up to the allocation granularity.
- map_size += kPageAllocationGranularityOffsetMask;
- map_size &= kPageAllocationGranularityBaseMask;
-
- // TODO: these pages will be zero-filled. Consider internalizing an
- // allocZeroed() API so we can avoid a memset() entirely in this case.
- char* ptr = reinterpret_cast<char*>(
- AllocPages(nullptr, map_size, kSuperPageSize, PageReadWrite));
- if (UNLIKELY(!ptr))
- return nullptr;
-
- size_t committed_page_size = size + kSystemPageSize;
- root->total_size_of_direct_mapped_pages += committed_page_size;
- root->IncreaseCommittedPages(committed_page_size);
-
- char* slot = ptr + kPartitionPageSize;
- CHECK(SetSystemPagesAccess(ptr + (kSystemPageSize * 2),
- kPartitionPageSize - (kSystemPageSize * 2),
- PageInaccessible));
-#if !defined(ARCH_CPU_64_BITS)
- CHECK(SetSystemPagesAccess(ptr, kSystemPageSize, PageInaccessible));
- CHECK(SetSystemPagesAccess(slot + size, kSystemPageSize, PageInaccessible));
-#endif
-
- PartitionSuperPageExtentEntry* extent =
- reinterpret_cast<PartitionSuperPageExtentEntry*>(
- PartitionSuperPageToMetadataArea(ptr));
- extent->root = root;
- // The new structures are all located inside a fresh system page so they
- // will all be zeroed out. These DCHECKs are for documentation.
- DCHECK(!extent->super_page_base);
- DCHECK(!extent->super_pages_end);
- DCHECK(!extent->next);
- PartitionPage* page = PartitionPage::FromPointerNoAlignmentCheck(slot);
- PartitionBucket* bucket = reinterpret_cast<PartitionBucket*>(
- reinterpret_cast<char*>(page) + (kPageMetadataSize * 2));
- DCHECK(!page->next_page);
- DCHECK(!page->num_allocated_slots);
- DCHECK(!page->num_unprovisioned_slots);
- DCHECK(!page->page_offset);
- DCHECK(!page->empty_cache_index);
- page->bucket = bucket;
- page->freelist_head = reinterpret_cast<PartitionFreelistEntry*>(slot);
- PartitionFreelistEntry* next_entry =
- reinterpret_cast<PartitionFreelistEntry*>(slot);
- next_entry->next = PartitionFreelistEntry::Transform(nullptr);
-
- DCHECK(!bucket->active_pages_head);
- DCHECK(!bucket->empty_pages_head);
- DCHECK(!bucket->decommitted_pages_head);
- DCHECK(!bucket->num_system_pages_per_slot_span);
- DCHECK(!bucket->num_full_pages);
- bucket->slot_size = size;
-
- PartitionDirectMapExtent* map_extent =
- PartitionDirectMapExtent::FromPage(page);
- map_extent->map_size = map_size - kPartitionPageSize - kSystemPageSize;
- map_extent->bucket = bucket;
-
- // Maintain the doubly-linked list of all direct mappings.
- map_extent->next_extent = root->direct_map_list;
- if (map_extent->next_extent)
- map_extent->next_extent->prev_extent = map_extent;
- map_extent->prev_extent = nullptr;
- root->direct_map_list = map_extent;
-
- return page;
-}
-
-} // namespace
-
-// static
-PartitionBucket PartitionBucket::sentinel_bucket_;
-
-PartitionBucket* PartitionBucket::get_sentinel_bucket() {
- return &sentinel_bucket_;
-}
-
-// TODO(ajwong): This seems to interact badly with
-// get_pages_per_slot_span() which rounds the value from this up to a
-// multiple of kNumSystemPagesPerPartitionPage (aka 4) anyways.
-// http://crbug.com/776537
-//
-// TODO(ajwong): The waste calculation seems wrong. The PTE usage should cover
-// both used and unused pages.
-// http://crbug.com/776537
-uint8_t PartitionBucket::get_system_pages_per_slot_span() {
- // This works out reasonably for the current bucket sizes of the generic
- // allocator, and the current values of partition page size and constants.
- // Specifically, we have enough room to always pack the slots perfectly into
- // some number of system pages. The only waste is the waste associated with
- // unfaulted pages (i.e. wasted address space).
- // TODO: we end up using a lot of system pages for very small sizes. For
- // example, we'll use 12 system pages for slot size 24. The slot size is
- // so small that the waste would be tiny with just 4, or 1, system pages.
- // Later, we can investigate whether there are anti-fragmentation benefits
- // to using fewer system pages.
- double best_waste_ratio = 1.0;
- uint16_t best_pages = 0;
- if (this->slot_size > kMaxSystemPagesPerSlotSpan * kSystemPageSize) {
- // TODO(ajwong): Why is there a DCHECK here for this?
- // http://crbug.com/776537
- DCHECK(!(this->slot_size % kSystemPageSize));
- best_pages = static_cast<uint16_t>(this->slot_size / kSystemPageSize);
- // TODO(ajwong): Should this be checking against
- // kMaxSystemPagesPerSlotSpan or numeric_limits<uint8_t>::max?
- // http://crbug.com/776537
- CHECK(best_pages < (1 << 8));
- return static_cast<uint8_t>(best_pages);
- }
- DCHECK(this->slot_size <= kMaxSystemPagesPerSlotSpan * kSystemPageSize);
- for (uint16_t i = kNumSystemPagesPerPartitionPage - 1;
- i <= kMaxSystemPagesPerSlotSpan; ++i) {
- size_t page_size = kSystemPageSize * i;
- size_t num_slots = page_size / this->slot_size;
- size_t waste = page_size - (num_slots * this->slot_size);
- // Leaving a page unfaulted is not free; the page will occupy an empty page
- // table entry. Make a simple attempt to account for that.
- //
- // TODO(ajwong): This looks wrong. PTEs are allocated for all pages
- // regardless of whether or not they are wasted. Should it just
- // be waste += i * sizeof(void*)?
- // http://crbug.com/776537
- size_t num_remainder_pages = i & (kNumSystemPagesPerPartitionPage - 1);
- size_t num_unfaulted_pages =
- num_remainder_pages
- ? (kNumSystemPagesPerPartitionPage - num_remainder_pages)
- : 0;
- waste += sizeof(void*) * num_unfaulted_pages;
- double waste_ratio =
- static_cast<double>(waste) / static_cast<double>(page_size);
- if (waste_ratio < best_waste_ratio) {
- best_waste_ratio = waste_ratio;
- best_pages = i;
- }
- }
- DCHECK(best_pages > 0);
- CHECK(best_pages <= kMaxSystemPagesPerSlotSpan);
- return static_cast<uint8_t>(best_pages);
-}
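
For experimentation, the heuristic can be restated standalone (a sketch assuming 4 KB system pages, 4 system pages per partition page, and a 16-page maximum span, matching the constants in this CL):

    #include <cstddef>
    #include <cstdint>
    #include <cstdio>

    int main() {
      const size_t kSystemPage = 4096;
      const size_t kPagesPerPartitionPage = 4;
      const size_t kMaxPages = 16;
      uint32_t slot_size = 320;  // A real bucket size in the generic range.
      double best_ratio = 1.0;
      uint16_t best_pages = 0;
      for (uint16_t i = kPagesPerPartitionPage - 1; i <= kMaxPages; ++i) {
        size_t page_size = kSystemPage * i;
        size_t waste = page_size % slot_size;  // Space after the last slot.
        // Rough accounting for the PTEs of unfaulted trailing pages.
        size_t rem = i & (kPagesPerPartitionPage - 1);
        if (rem)
          waste += sizeof(void*) * (kPagesPerPartitionPage - rem);
        double ratio = static_cast<double>(waste) / page_size;
        if (ratio < best_ratio) {
          best_ratio = ratio;
          best_pages = i;
        }
      }
      printf("slot_size=%u -> %u system pages per span\n", slot_size,
             static_cast<unsigned>(best_pages));
      return 0;
    }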
-
-void PartitionBucket::Init(uint32_t new_slot_size) {
- slot_size = new_slot_size;
- active_pages_head = PartitionPage::get_sentinel_page();
- empty_pages_head = nullptr;
- decommitted_pages_head = nullptr;
- num_full_pages = 0;
- num_system_pages_per_slot_span = get_system_pages_per_slot_span();
-}
-
-NOINLINE void PartitionBucket::OnFull() {
- OOM_CRASH();
-}
-
-ALWAYS_INLINE void* PartitionBucket::AllocNewSlotSpan(
- PartitionRootBase* root,
- int flags,
- uint16_t num_partition_pages) {
- DCHECK(!(reinterpret_cast<uintptr_t>(root->next_partition_page) %
- kPartitionPageSize));
- DCHECK(!(reinterpret_cast<uintptr_t>(root->next_partition_page_end) %
- kPartitionPageSize));
- DCHECK(num_partition_pages <= kNumPartitionPagesPerSuperPage);
- size_t total_size = kPartitionPageSize * num_partition_pages;
- size_t num_partition_pages_left =
- (root->next_partition_page_end - root->next_partition_page) >>
- kPartitionPageShift;
- if (LIKELY(num_partition_pages_left >= num_partition_pages)) {
- // In this case, we can still hand out pages from the current super page
- // allocation.
- char* ret = root->next_partition_page;
-
- // Fresh System Pages in the SuperPages are decommitted. Commit them
- // before vending them back.
- CHECK(SetSystemPagesAccess(ret, total_size, PageReadWrite));
-
- root->next_partition_page += total_size;
- root->IncreaseCommittedPages(total_size);
- return ret;
- }
-
- // Need a new super page. We want to allocate super pages in a contiguous
- // address region as much as possible. This is important for not causing
- // page table bloat and not fragmenting address spaces on 32-bit
- // architectures.
- char* requested_address = root->next_super_page;
- char* super_page = reinterpret_cast<char*>(AllocPages(
- requested_address, kSuperPageSize, kSuperPageSize, PageReadWrite));
- if (UNLIKELY(!super_page))
- return nullptr;
-
- root->total_size_of_super_pages += kSuperPageSize;
- root->IncreaseCommittedPages(total_size);
-
- // |total_size| MUST be less than kSuperPageSize - (kPartitionPageSize*2).
- // This is a trustworthy value because num_partition_pages is not user
- // controlled.
- //
- // TODO(ajwong): Introduce a DCHECK.
- root->next_super_page = super_page + kSuperPageSize;
- char* ret = super_page + kPartitionPageSize;
- root->next_partition_page = ret + total_size;
- root->next_partition_page_end = root->next_super_page - kPartitionPageSize;
- // Make the first partition page in the super page a guard page, but leave a
- // hole in the middle.
- // This is where we put page metadata and also a tiny amount of extent
- // metadata.
- CHECK(SetSystemPagesAccess(super_page, kSystemPageSize, PageInaccessible));
- CHECK(SetSystemPagesAccess(super_page + (kSystemPageSize * 2),
- kPartitionPageSize - (kSystemPageSize * 2),
- PageInaccessible));
- // CHECK(SetSystemPagesAccess(super_page + (kSuperPageSize -
- // kPartitionPageSize),
- // kPartitionPageSize, PageInaccessible));
- // All remaining slot spans for the unallocated PartitionPages inside the
- // SuperPage are conceptually decommitted. Correctly set the state here
- // so they do not occupy resources.
- //
- // TODO(ajwong): Refactor Page Allocator API so the SuperPage comes in
- // decommitted initially.
- CHECK(SetSystemPagesAccess(super_page + kPartitionPageSize + total_size,
- (kSuperPageSize - kPartitionPageSize - total_size),
- PageInaccessible));
-
- // If we were after a specific address, but didn't get it, assume that
- // the system chose a lousy address. Here most OSes have a default
- // algorithm that isn't randomized. For example, most Linux
- // distributions will allocate the mapping directly before the last
- // successful mapping, which is far from random. So we just get fresh
- // randomness for the next mapping attempt.
- if (requested_address && requested_address != super_page)
- root->next_super_page = nullptr;
-
- // We allocated a new super page so update super page metadata.
- // First check if this is a new extent or not.
- PartitionSuperPageExtentEntry* latest_extent =
- reinterpret_cast<PartitionSuperPageExtentEntry*>(
- PartitionSuperPageToMetadataArea(super_page));
- // By storing the root in every extent metadata object, we have a fast way
- // to go from a pointer within the partition to the root object.
- latest_extent->root = root;
- // Most new extents will be part of a larger extent, and these three fields
- // are unused, but we initialize them to 0 so that we get a clear signal
- // in case they are accidentally used.
- latest_extent->super_page_base = nullptr;
- latest_extent->super_pages_end = nullptr;
- latest_extent->next = nullptr;
-
- PartitionSuperPageExtentEntry* current_extent = root->current_extent;
- bool is_new_extent = (super_page != requested_address);
- if (UNLIKELY(is_new_extent)) {
- if (UNLIKELY(!current_extent)) {
- DCHECK(!root->first_extent);
- root->first_extent = latest_extent;
- } else {
- DCHECK(current_extent->super_page_base);
- current_extent->next = latest_extent;
- }
- root->current_extent = latest_extent;
- latest_extent->super_page_base = super_page;
- latest_extent->super_pages_end = super_page + kSuperPageSize;
- } else {
- // We allocated next to an existing extent so just nudge the size up a
- // little.
- DCHECK(current_extent->super_pages_end);
- current_extent->super_pages_end += kSuperPageSize;
- DCHECK(ret >= current_extent->super_page_base &&
- ret < current_extent->super_pages_end);
- }
- return ret;
-}
-
-ALWAYS_INLINE uint16_t PartitionBucket::get_pages_per_slot_span() {
- // Rounds up to nearest multiple of kNumSystemPagesPerPartitionPage.
- return (num_system_pages_per_slot_span +
- (kNumSystemPagesPerPartitionPage - 1)) /
- kNumSystemPagesPerPartitionPage;
-}
-
-ALWAYS_INLINE void PartitionBucket::InitializeSlotSpan(PartitionPage* page) {
- // The bucket never changes. We set it up once.
- page->bucket = this;
- page->empty_cache_index = -1;
-
- page->Reset();
-
- // If this page has just a single slot, do not set up page offsets for any
- // page metadata other than the first one. This ensures that attempts to
- // touch invalid page metadata fail.
- if (page->num_unprovisioned_slots == 1)
- return;
-
- uint16_t num_partition_pages = get_pages_per_slot_span();
- char* page_char_ptr = reinterpret_cast<char*>(page);
- for (uint16_t i = 1; i < num_partition_pages; ++i) {
- page_char_ptr += kPageMetadataSize;
- PartitionPage* secondary_page =
- reinterpret_cast<PartitionPage*>(page_char_ptr);
- secondary_page->page_offset = i;
- }
-}
-
-ALWAYS_INLINE char* PartitionBucket::AllocAndFillFreelist(PartitionPage* page) {
- DCHECK(page != PartitionPage::get_sentinel_page());
- uint16_t num_slots = page->num_unprovisioned_slots;
- DCHECK(num_slots);
- // We should only get here when _every_ slot is either used or unprovisioned.
- // (The third state is "on the freelist". If we have a non-empty freelist, we
- // should not get here.)
- DCHECK(num_slots + page->num_allocated_slots == this->get_slots_per_span());
- // Similarly, explicitly make sure that the freelist is empty.
- DCHECK(!page->freelist_head);
- DCHECK(page->num_allocated_slots >= 0);
-
- size_t size = this->slot_size;
- char* base = reinterpret_cast<char*>(PartitionPage::ToPointer(page));
- char* return_object = base + (size * page->num_allocated_slots);
- char* first_freelist_pointer = return_object + size;
- char* first_freelist_pointer_extent =
- first_freelist_pointer + sizeof(PartitionFreelistEntry*);
- // Our goal is to fault as few system pages as possible. We calculate the
- // page containing the "end" of the returned slot, and then allow freelist
- // pointers to be written up to the end of that page.
- char* sub_page_limit = reinterpret_cast<char*>(
- RoundUpToSystemPage(reinterpret_cast<size_t>(first_freelist_pointer)));
- char* slots_limit = return_object + (size * num_slots);
- char* freelist_limit = sub_page_limit;
- if (UNLIKELY(slots_limit < freelist_limit))
- freelist_limit = slots_limit;
-
- uint16_t num_new_freelist_entries = 0;
- if (LIKELY(first_freelist_pointer_extent <= freelist_limit)) {
- // Only consider used space in the slot span. If we consider wasted
- // space, we may get an off-by-one when a freelist pointer fits in the
- // wasted space, but a slot does not.
- // We know we can fit at least one freelist pointer.
- num_new_freelist_entries = 1;
- // Any further entries require space for the whole slot span.
- num_new_freelist_entries += static_cast<uint16_t>(
- (freelist_limit - first_freelist_pointer_extent) / size);
- }
-
- // We always return an object slot -- that's the +1 below.
- // We do not necessarily create any new freelist entries, because we cross
- // sub page boundaries frequently for large bucket sizes.
- DCHECK(num_new_freelist_entries + 1 <= num_slots);
- num_slots -= (num_new_freelist_entries + 1);
- page->num_unprovisioned_slots = num_slots;
- page->num_allocated_slots++;
-
- if (LIKELY(num_new_freelist_entries)) {
- char* freelist_pointer = first_freelist_pointer;
- PartitionFreelistEntry* entry =
- reinterpret_cast<PartitionFreelistEntry*>(freelist_pointer);
- page->freelist_head = entry;
- while (--num_new_freelist_entries) {
- freelist_pointer += size;
- PartitionFreelistEntry* next_entry =
- reinterpret_cast<PartitionFreelistEntry*>(freelist_pointer);
- entry->next = PartitionFreelistEntry::Transform(next_entry);
- entry = next_entry;
- }
- entry->next = PartitionFreelistEntry::Transform(nullptr);
- } else {
- page->freelist_head = nullptr;
- }
- return return_object;
-}
-
-bool PartitionBucket::SetNewActivePage() {
- PartitionPage* page = this->active_pages_head;
- if (page == PartitionPage::get_sentinel_page())
- return false;
-
- PartitionPage* next_page;
-
- for (; page; page = next_page) {
- next_page = page->next_page;
- DCHECK(page->bucket == this);
- DCHECK(page != this->empty_pages_head);
- DCHECK(page != this->decommitted_pages_head);
-
- if (LIKELY(page->is_active())) {
- // This page is usable because it has freelist entries, or has
- // unprovisioned slots we can create freelist entries from.
- this->active_pages_head = page;
- return true;
- }
-
- // Deal with empty and decommitted pages.
- if (LIKELY(page->is_empty())) {
- page->next_page = this->empty_pages_head;
- this->empty_pages_head = page;
- } else if (LIKELY(page->is_decommitted())) {
- page->next_page = this->decommitted_pages_head;
- this->decommitted_pages_head = page;
- } else {
- DCHECK(page->is_full());
- // If we get here, we found a full page. Skip over it too, and also
- // tag it as full (via a negative value). We need it tagged so that a
- // later free() can tell, and can move it back into the active page list.
- page->num_allocated_slots = -page->num_allocated_slots;
- ++this->num_full_pages;
- // num_full_pages is a 24-bit bitfield for efficient packing, so guard
- // against overflow to be safe.
- if (UNLIKELY(!this->num_full_pages))
- OnFull();
- // Not necessary but might help stop accidents.
- page->next_page = nullptr;
- }
- }
-
- this->active_pages_head = PartitionPage::get_sentinel_page();
- return false;
-}
-
-void* PartitionBucket::SlowPathAlloc(PartitionRootBase* root,
- int flags,
- size_t size) {
- // The slow path is called when the freelist is empty.
- DCHECK(!this->active_pages_head->freelist_head);
-
- PartitionPage* new_page = nullptr;
-
- // For the PartitionRootGeneric::Alloc() API, we have a bunch of buckets
- // marked as special cases. We bounce them through to the slow path so that
- // we can still have a blazing fast hot path due to lack of corner-case
- // branches.
- //
- // Note: The ordering of the conditionals matter! In particular,
- // SetNewActivePage() has a side-effect even when returning
- // false where it sweeps the active page list and may move things into
- // the empty or decommitted lists which affects the subsequent conditional.
- bool return_null = flags & PartitionAllocReturnNull;
- if (UNLIKELY(this->is_direct_mapped())) {
- DCHECK(size > kGenericMaxBucketed);
- DCHECK(this == get_sentinel_bucket());
- DCHECK(this->active_pages_head == PartitionPage::get_sentinel_page());
- if (size > kGenericMaxDirectMapped) {
- if (return_null)
- return nullptr;
- PartitionExcessiveAllocationSize();
- }
- new_page = PartitionDirectMap(root, flags, size);
- } else if (LIKELY(this->SetNewActivePage())) {
- // First, did we find an active page in the active pages list?
- new_page = this->active_pages_head;
- DCHECK(new_page->is_active());
- } else if (LIKELY(this->empty_pages_head != nullptr) ||
- LIKELY(this->decommitted_pages_head != nullptr)) {
- // Second, look in our lists of empty and decommitted pages.
- // Check empty pages first, which are preferred, but beware that an
- // empty page might have been decommitted.
- while (LIKELY((new_page = this->empty_pages_head) != nullptr)) {
- DCHECK(new_page->bucket == this);
- DCHECK(new_page->is_empty() || new_page->is_decommitted());
- this->empty_pages_head = new_page->next_page;
- // Accept the empty page unless it got decommitted.
- if (new_page->freelist_head) {
- new_page->next_page = nullptr;
- break;
- }
- DCHECK(new_page->is_decommitted());
- new_page->next_page = this->decommitted_pages_head;
- this->decommitted_pages_head = new_page;
- }
- if (UNLIKELY(!new_page) &&
- LIKELY(this->decommitted_pages_head != nullptr)) {
- new_page = this->decommitted_pages_head;
- DCHECK(new_page->bucket == this);
- DCHECK(new_page->is_decommitted());
- this->decommitted_pages_head = new_page->next_page;
- void* addr = PartitionPage::ToPointer(new_page);
- root->RecommitSystemPages(addr, new_page->bucket->get_bytes_per_span());
- new_page->Reset();
- }
- DCHECK(new_page);
- } else {
- // Third. If we get here, we need a brand new page.
- uint16_t num_partition_pages = this->get_pages_per_slot_span();
- void* raw_pages = AllocNewSlotSpan(root, flags, num_partition_pages);
- if (LIKELY(raw_pages != nullptr)) {
- new_page = PartitionPage::FromPointerNoAlignmentCheck(raw_pages);
- InitializeSlotSpan(new_page);
- }
- }
-
- // Bail if we had a memory allocation failure.
- if (UNLIKELY(!new_page)) {
- DCHECK(this->active_pages_head == PartitionPage::get_sentinel_page());
- if (return_null)
- return nullptr;
- root->OutOfMemory();
- }
-
- // TODO(ajwong): Is there a way to avoid the reading of bucket here?
- // It seems like in many of the conditional branches above, |this| ==
- // |new_page->bucket|. Maybe pull this into another function?
- PartitionBucket* bucket = new_page->bucket;
- DCHECK(bucket != get_sentinel_bucket());
- bucket->active_pages_head = new_page;
- new_page->set_raw_size(size);
-
- // If we found an active page with free slots, or an empty page, we have a
- // usable freelist head.
- if (LIKELY(new_page->freelist_head != nullptr)) {
- PartitionFreelistEntry* entry = new_page->freelist_head;
- PartitionFreelistEntry* new_head =
- PartitionFreelistEntry::Transform(entry->next);
- new_page->freelist_head = new_head;
- new_page->num_allocated_slots++;
- return entry;
- }
- // Otherwise, we need to build the freelist.
- DCHECK(new_page->num_unprovisioned_slots);
- return AllocAndFillFreelist(new_page);
-}
-
-} // namespace internal
-} // namespace base
diff --git a/base/allocator/partition_allocator/partition_bucket.h b/base/allocator/partition_allocator/partition_bucket.h
deleted file mode 100644
index a626dfa..0000000
--- a/base/allocator/partition_allocator/partition_bucket.h
+++ /dev/null
@@ -1,121 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_BUCKET_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_BUCKET_H_
-
-#include <stddef.h>
-#include <stdint.h>
-
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/base_export.h"
-#include "base/compiler_specific.h"
-
-namespace base {
-namespace internal {
-
-struct PartitionPage;
-struct PartitionRootBase;
-
-struct PartitionBucket {
- // Accessed most in hot path => goes first.
- PartitionPage* active_pages_head;
-
- PartitionPage* empty_pages_head;
- PartitionPage* decommitted_pages_head;
- uint32_t slot_size;
- uint32_t num_system_pages_per_slot_span : 8;
- uint32_t num_full_pages : 24;
-
- // Public API.
- void Init(uint32_t new_slot_size);
-
- // Note the matching Free() functions are in PartitionPage.
- BASE_EXPORT NOINLINE void* SlowPathAlloc(PartitionRootBase* root,
- int flags,
- size_t size);
-
- ALWAYS_INLINE bool is_direct_mapped() const {
- return !num_system_pages_per_slot_span;
- }
- ALWAYS_INLINE size_t get_bytes_per_span() const {
- // TODO(ajwong): Change to CheckedMul. https://crbug.com/787153
- // https://crbug.com/680657
- return num_system_pages_per_slot_span * kSystemPageSize;
- }
- ALWAYS_INLINE uint16_t get_slots_per_span() const {
- // TODO(ajwong): Change to CheckedMul. https://crbug.com/787153
- // https://crbug.com/680657
- return static_cast<uint16_t>(get_bytes_per_span() / slot_size);
- }
-
- static ALWAYS_INLINE size_t get_direct_map_size(size_t size) {
- // Caller must check that the size is not above the kGenericMaxDirectMapped
- // limit before calling. This also guards against integer overflow in the
- // calculation here.
- DCHECK(size <= kGenericMaxDirectMapped);
- return (size + kSystemPageOffsetMask) & kSystemPageBaseMask;
- }
-
- // TODO(ajwong): Can this be made private? https://crbug.com/787153
- static PartitionBucket* get_sentinel_bucket();
-
- // This helper function scans a bucket's active page list for a suitable new
- // active page. When it finds a suitable new active page (one that has
- // free slots and is not empty), it is set as the new active page. If there
- // is no suitable new active page, the current active page is set to
- // PartitionPage::get_sentinel_page(). As potential pages are scanned, they
- // are tidied up according to their state. Empty pages are swept on to the
- // empty page list, decommitted pages on to the decommitted page list and full
- // pages are unlinked from any list.
- //
- // This is where the guts of the bucket maintenance is done!
- bool SetNewActivePage();
-
- private:
- static void OutOfMemory(const PartitionRootBase* root);
- static void OutOfMemoryWithLotsOfUncommitedPages();
-
- static NOINLINE void OnFull();
-
- // Returns a natural number of PartitionPages (calculated by
- // get_system_pages_per_slot_span()) to allocate from the current
- // SuperPage when the bucket runs out of slots.
- ALWAYS_INLINE uint16_t get_pages_per_slot_span();
-
- // Returns the number of system pages in a slot span.
- //
- // The calculation attempts to find the best number of System Pages to
- // allocate for the given slot_size to minimize wasted space. It uses a
- // heuristic that looks at number of bytes wasted after the last slot and
- // attempts to account for the PTE usage of each System Page.
- uint8_t get_system_pages_per_slot_span();
-
- // Allocates a new slot span with size |num_partition_pages| from the
- // current extent. Metadata within this slot span will be uninitialized.
- // Returns nullptr on error.
- ALWAYS_INLINE void* AllocNewSlotSpan(PartitionRootBase* root,
- int flags,
- uint16_t num_partition_pages);
-
- // Each bucket allocates a slot span when it runs out of slots.
- // A slot span's size is equal to get_pages_per_slot_span() number of
- // PartitionPages. This function initializes all PartitionPage within the
- // span to point to the first PartitionPage which holds all the metadata
- // for the span and registers this bucket as the owner of the span. It does
- // NOT put the slots into the bucket's freelist.
- ALWAYS_INLINE void InitializeSlotSpan(PartitionPage* page);
-
- // Allocates one slot from the given |page| and then adds the remainder to
- // the current bucket. If the |page| was freshly allocated, it must have been
- // passed through InitializeSlotSpan() first.
- ALWAYS_INLINE char* AllocAndFillFreelist(PartitionPage* page);
-
- static PartitionBucket sentinel_bucket_;
-};
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_BUCKET_H_
diff --git a/base/allocator/partition_allocator/partition_cookie.h b/base/allocator/partition_allocator/partition_cookie.h
deleted file mode 100644
index 8e6cb20..0000000
--- a/base/allocator/partition_allocator/partition_cookie.h
+++ /dev/null
@@ -1,72 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_COOKIE_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_COOKIE_H_
-
-#include "base/compiler_specific.h"
-#include "base/logging.h"
-
-namespace base {
-namespace internal {
-
-#if DCHECK_IS_ON()
-// These two byte values match tcmalloc.
-static const unsigned char kUninitializedByte = 0xAB;
-static const unsigned char kFreedByte = 0xCD;
-static const size_t kCookieSize =
- 16; // Handles alignment up to XMM instructions on Intel.
-static const unsigned char kCookieValue[kCookieSize] = {
- 0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xD0, 0x0D,
- 0x13, 0x37, 0xF0, 0x05, 0xBA, 0x11, 0xAB, 0x1E};
-#endif
-
-ALWAYS_INLINE void PartitionCookieCheckValue(void* ptr) {
-#if DCHECK_IS_ON()
- unsigned char* cookie_ptr = reinterpret_cast<unsigned char*>(ptr);
- for (size_t i = 0; i < kCookieSize; ++i, ++cookie_ptr)
- DCHECK(*cookie_ptr == kCookieValue[i]);
-#endif
-}
-
-ALWAYS_INLINE size_t PartitionCookieSizeAdjustAdd(size_t size) {
-#if DCHECK_IS_ON()
- // Add space for cookies, checking for integer overflow. TODO(palmer):
- // Investigate the performance and code size implications of using
- // CheckedNumeric throughout PA.
- DCHECK(size + (2 * kCookieSize) > size);
- size += 2 * kCookieSize;
-#endif
- return size;
-}
-
-ALWAYS_INLINE void* PartitionCookieFreePointerAdjust(void* ptr) {
-#if DCHECK_IS_ON()
- // The value given to the application is actually just after the cookie.
- ptr = static_cast<char*>(ptr) - kCookieSize;
-#endif
- return ptr;
-}
-
-ALWAYS_INLINE size_t PartitionCookieSizeAdjustSubtract(size_t size) {
-#if DCHECK_IS_ON()
- // Remove space for cookies.
- DCHECK(size >= 2 * kCookieSize);
- size -= 2 * kCookieSize;
-#endif
- return size;
-}
-
-ALWAYS_INLINE void PartitionCookieWriteValue(void* ptr) {
-#if DCHECK_IS_ON()
- unsigned char* cookie_ptr = reinterpret_cast<unsigned char*>(ptr);
- for (size_t i = 0; i < kCookieSize; ++i, ++cookie_ptr)
- *cookie_ptr = kCookieValue[i];
-#endif
-}
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_COOKIE_H_
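Taken together, the helpers above define a simple protocol: requested sizes are widened by two cookies, the cookie pattern is written immediately before and after the user region, and both copies are verified on free. A standalone sketch of that protocol, not the real slot layout — assert() stands in for DCHECK and plain malloc stands in for a partition slot:

```cpp
// Sketch of the cookie protocol from partition_cookie.h (DCHECK builds only).
#include <cassert>
#include <cstddef>
#include <cstdlib>

constexpr size_t kCookieSize = 16;
constexpr unsigned char kCookieValue[kCookieSize] = {
    0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0xD0, 0x0D,
    0x13, 0x37, 0xF0, 0x05, 0xBA, 0x11, 0xAB, 0x1E};

void WriteCookie(unsigned char* p) {
  for (size_t i = 0; i < kCookieSize; ++i)
    p[i] = kCookieValue[i];
}

void CheckCookie(const unsigned char* p) {
  for (size_t i = 0; i < kCookieSize; ++i)
    assert(p[i] == kCookieValue[i]);  // Fires if a cookie byte was clobbered.
}

int main() {
  const size_t size = 64;  // What the caller asked for.
  // SizeAdjustAdd: reserve one cookie on each side of the user region.
  unsigned char* raw =
      static_cast<unsigned char*>(std::malloc(size + 2 * kCookieSize));
  WriteCookie(raw);                         // Leading cookie.
  WriteCookie(raw + kCookieSize + size);    // Trailing cookie.
  unsigned char* user = raw + kCookieSize;  // Pointer handed to the caller.
  // ... the application writes at most |size| bytes through |user| ...
  // On free: FreePointerAdjust walks back, then both cookies are checked.
  CheckCookie(user - kCookieSize);
  CheckCookie(user + size);
  std::free(raw);
  return 0;
}
```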
diff --git a/base/allocator/partition_allocator/partition_direct_map_extent.h b/base/allocator/partition_allocator/partition_direct_map_extent.h
deleted file mode 100644
index 2a0bb19..0000000
--- a/base/allocator/partition_allocator/partition_direct_map_extent.h
+++ /dev/null
@@ -1,33 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_DIRECT_MAP_EXTENT_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_DIRECT_MAP_EXTENT_H_
-
-#include "base/allocator/partition_allocator/partition_bucket.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-
-namespace base {
-namespace internal {
-
-struct PartitionDirectMapExtent {
- PartitionDirectMapExtent* next_extent;
- PartitionDirectMapExtent* prev_extent;
- PartitionBucket* bucket;
- size_t map_size; // Mapped size, not including guard pages and meta-data.
-
- ALWAYS_INLINE static PartitionDirectMapExtent* FromPage(PartitionPage* page);
-};
-
-ALWAYS_INLINE PartitionDirectMapExtent* PartitionDirectMapExtent::FromPage(
- PartitionPage* page) {
- DCHECK(page->bucket->is_direct_mapped());
- return reinterpret_cast<PartitionDirectMapExtent*>(
- reinterpret_cast<char*>(page) + 3 * kPageMetadataSize);
-}
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_DIRECT_MAP_EXTENT_H_
diff --git a/base/allocator/partition_allocator/partition_freelist_entry.h b/base/allocator/partition_allocator/partition_freelist_entry.h
deleted file mode 100644
index c9fe004..0000000
--- a/base/allocator/partition_allocator/partition_freelist_entry.h
+++ /dev/null
@@ -1,48 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_FREELIST_ENTRY_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_FREELIST_ENTRY_H_
-
-#include <stdint.h>
-
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/compiler_specific.h"
-#include "base/sys_byteorder.h"
-#include "build_config.h"
-
-namespace base {
-namespace internal {
-
-// TODO(ajwong): Introduce an EncodedFreelistEntry type and then replace
-// Transform() with Encode()/Decode() such that the API provides some static
-// type safety.
-//
-// https://crbug.com/787153
-struct PartitionFreelistEntry {
- PartitionFreelistEntry* next;
-
- static ALWAYS_INLINE PartitionFreelistEntry* Transform(
- PartitionFreelistEntry* ptr) {
-// We use bswap on little endian as a fast mask for two reasons:
-// 1) If an object is freed and its vtable used where the attacker doesn't
-// get the chance to run allocations between the free and use, the vtable
-// dereference is likely to fault.
-// 2) If the attacker has a linear buffer overflow and elects to try and
-// corrupt a freelist pointer, partial pointer overwrite attacks are
-// thwarted.
-// For big endian, similar guarantees are arrived at with a negation.
-#if defined(ARCH_CPU_BIG_ENDIAN)
- uintptr_t masked = ~reinterpret_cast<uintptr_t>(ptr);
-#else
- uintptr_t masked = ByteSwapUintPtrT(reinterpret_cast<uintptr_t>(ptr));
-#endif
- return reinterpret_cast<PartitionFreelistEntry*>(masked);
- }
-};
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_FREELIST_ENTRY_H_
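Because byteswapping (and, on big endian, negation) is an involution, Transform() is its own inverse — that is why one function serves as both encode and decode. A minimal round-trip demo, assuming a 64-bit little-endian target and using the GCC/Clang builtin __builtin_bswap64 in place of base's ByteSwapUintPtrT:

```cpp
#include <cassert>
#include <cstdint>

// Stand-in for base::ByteSwapUintPtrT on a 64-bit little-endian target; an
// assumption for illustration, not the production code path.
uintptr_t Transform(uintptr_t ptr) {
  return __builtin_bswap64(ptr);
}

int main() {
  int slot = 0;
  uintptr_t raw = reinterpret_cast<uintptr_t>(&slot);
  uintptr_t encoded = Transform(raw);  // What gets stored in a freed slot.
  assert(Transform(encoded) == raw);   // Decoding is the same operation.
  // A linear overflow that rewrites only the first (low) bytes of |encoded|
  // corrupts the *high* bytes of the decoded pointer, which is how partial
  // pointer overwrites are thwarted, per the comment above.
  return 0;
}
```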
diff --git a/base/allocator/partition_allocator/partition_oom.cc b/base/allocator/partition_allocator/partition_oom.cc
deleted file mode 100644
index 6476761..0000000
--- a/base/allocator/partition_allocator/partition_oom.cc
+++ /dev/null
@@ -1,24 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/partition_oom.h"
-
-#include "base/allocator/partition_allocator/oom.h"
-#include "build_config.h"
-
-namespace base {
-namespace internal {
-
-void NOINLINE PartitionExcessiveAllocationSize() {
- OOM_CRASH();
-}
-
-#if !defined(ARCH_CPU_64_BITS)
-NOINLINE void PartitionOutOfMemoryWithLotsOfUncommitedPages() {
- OOM_CRASH();
-}
-#endif
-
-} // namespace internal
-} // namespace base
diff --git a/base/allocator/partition_allocator/partition_oom.h b/base/allocator/partition_allocator/partition_oom.h
deleted file mode 100644
index 242da38..0000000
--- a/base/allocator/partition_allocator/partition_oom.h
+++ /dev/null
@@ -1,26 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// Holds functions for generating OOM errors from PartitionAlloc. This is
-// distinct from oom.h in that it is meant only for use in PartitionAlloc.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_OOM_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_OOM_H_
-
-#include "base/compiler_specific.h"
-#include "build_config.h"
-
-namespace base {
-namespace internal {
-
-NOINLINE void PartitionExcessiveAllocationSize();
-
-#if !defined(ARCH_CPU_64_BITS)
-NOINLINE void PartitionOutOfMemoryWithLotsOfUncommitedPages();
-#endif
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_OOM_H_
diff --git a/base/allocator/partition_allocator/partition_page.cc b/base/allocator/partition_allocator/partition_page.cc
deleted file mode 100644
index 3c9e041..0000000
--- a/base/allocator/partition_allocator/partition_page.cc
+++ /dev/null
@@ -1,163 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/partition_page.h"
-
-#include "base/allocator/partition_allocator/partition_direct_map_extent.h"
-#include "base/allocator/partition_allocator/partition_root_base.h"
-
-namespace base {
-namespace internal {
-
-namespace {
-
-ALWAYS_INLINE void PartitionDirectUnmap(PartitionPage* page) {
- PartitionRootBase* root = PartitionRootBase::FromPage(page);
- const PartitionDirectMapExtent* extent =
- PartitionDirectMapExtent::FromPage(page);
- size_t unmap_size = extent->map_size;
-
- // Maintain the doubly-linked list of all direct mappings.
- if (extent->prev_extent) {
- DCHECK(extent->prev_extent->next_extent == extent);
- extent->prev_extent->next_extent = extent->next_extent;
- } else {
- root->direct_map_list = extent->next_extent;
- }
- if (extent->next_extent) {
- DCHECK(extent->next_extent->prev_extent == extent);
- extent->next_extent->prev_extent = extent->prev_extent;
- }
-
- // Add on the size of the trailing guard page and preceding partition
- // page.
- unmap_size += kPartitionPageSize + kSystemPageSize;
-
- size_t uncommitted_page_size = page->bucket->slot_size + kSystemPageSize;
- root->DecreaseCommittedPages(uncommitted_page_size);
- DCHECK(root->total_size_of_direct_mapped_pages >= uncommitted_page_size);
- root->total_size_of_direct_mapped_pages -= uncommitted_page_size;
-
- DCHECK(!(unmap_size & kPageAllocationGranularityOffsetMask));
-
- char* ptr = reinterpret_cast<char*>(PartitionPage::ToPointer(page));
- // Account for the mapping starting a partition page before the actual
- // allocation address.
- ptr -= kPartitionPageSize;
-
- FreePages(ptr, unmap_size);
-}
-
-ALWAYS_INLINE void PartitionRegisterEmptyPage(PartitionPage* page) {
- DCHECK(page->is_empty());
- PartitionRootBase* root = PartitionRootBase::FromPage(page);
-
- // If the page is already registered as empty, give it another life.
- if (page->empty_cache_index != -1) {
- DCHECK(page->empty_cache_index >= 0);
- DCHECK(static_cast<unsigned>(page->empty_cache_index) < kMaxFreeableSpans);
- DCHECK(root->global_empty_page_ring[page->empty_cache_index] == page);
- root->global_empty_page_ring[page->empty_cache_index] = nullptr;
- }
-
- int16_t current_index = root->global_empty_page_ring_index;
- PartitionPage* page_to_decommit = root->global_empty_page_ring[current_index];
- // The page might well have been re-activated, filled up, etc. before we get
- // around to looking at it here.
- if (page_to_decommit)
- page_to_decommit->DecommitIfPossible(root);
-
- // We put the empty slot span on our global list of "pages that were once
- // empty". thus providing it a bit of breathing room to get re-used before
- // we really free it. This improves performance, particularly on Mac OS X
- // which has subpar memory management performance.
- root->global_empty_page_ring[current_index] = page;
- page->empty_cache_index = current_index;
- ++current_index;
- if (current_index == kMaxFreeableSpans)
- current_index = 0;
- root->global_empty_page_ring_index = current_index;
-}
-
-} // namespace
-
-// static
-PartitionPage PartitionPage::sentinel_page_;
-
-PartitionPage* PartitionPage::get_sentinel_page() {
- return &sentinel_page_;
-}
-
-void PartitionPage::FreeSlowPath() {
- DCHECK(this != get_sentinel_page());
- if (LIKELY(this->num_allocated_slots == 0)) {
- // Page became fully unused.
- if (UNLIKELY(bucket->is_direct_mapped())) {
- PartitionDirectUnmap(this);
- return;
- }
- // If it's the current active page, change it. We bounce the page to
- // the empty list as a force towards defragmentation.
- if (LIKELY(this == bucket->active_pages_head))
- bucket->SetNewActivePage();
- DCHECK(bucket->active_pages_head != this);
-
- set_raw_size(0);
- DCHECK(!get_raw_size());
-
- PartitionRegisterEmptyPage(this);
- } else {
- DCHECK(!bucket->is_direct_mapped());
- // Ensure that the page is full. That's the only valid case if we
- // arrive here.
- DCHECK(this->num_allocated_slots < 0);
- // A transition of num_allocated_slots from 0 to -1 is not legal, and
- // likely indicates a double-free.
- CHECK(this->num_allocated_slots != -1);
- this->num_allocated_slots = -this->num_allocated_slots - 2;
- DCHECK(this->num_allocated_slots == bucket->get_slots_per_span() - 1);
- // Fully used page became partially used. It must be put back on the
- // non-full page list. Also make it the current page to increase the
- // chances of it being filled up again. The old current page will be
- // the next page.
- DCHECK(!this->next_page);
- if (LIKELY(bucket->active_pages_head != get_sentinel_page()))
- this->next_page = bucket->active_pages_head;
- bucket->active_pages_head = this;
- --bucket->num_full_pages;
- // Special case: for a partition page with just a single slot, it may
- // now be empty and we want to run it through the empty logic.
- if (UNLIKELY(this->num_allocated_slots == 0))
- FreeSlowPath();
- }
-}
-
-void PartitionPage::Decommit(PartitionRootBase* root) {
- DCHECK(is_empty());
- DCHECK(!bucket->is_direct_mapped());
- void* addr = PartitionPage::ToPointer(this);
- root->DecommitSystemPages(addr, bucket->get_bytes_per_span());
-
- // We actually leave the decommitted page in the active list. We'll sweep
- // it on to the decommitted page list when we next walk the active page
- // list.
- // Pulling this trick enables us to use a singly-linked page list for all
- // cases, which is critical in keeping the page metadata structure down to
- // 32 bytes in size.
- freelist_head = nullptr;
- num_unprovisioned_slots = 0;
- DCHECK(is_decommitted());
-}
-
-void PartitionPage::DecommitIfPossible(PartitionRootBase* root) {
- DCHECK(empty_cache_index >= 0);
- DCHECK(static_cast<unsigned>(empty_cache_index) < kMaxFreeableSpans);
- DCHECK(this == root->global_empty_page_ring[empty_cache_index]);
- empty_cache_index = -1;
- if (is_empty())
- Decommit(root);
-}
-
-} // namespace internal
-} // namespace base
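PartitionRegisterEmptyPage() above is a fixed-size ring: a newly emptied page evicts (and possibly decommits) whatever has been waiting in the current slot, then the index advances modulo kMaxFreeableSpans. A toy sketch of just that rotation, with pages reduced to opaque pointers and an illustrative ring size (the real constant lives in partition_alloc_constants.h):

```cpp
#include <array>
#include <cstdint>

// Illustrative only: the real kMaxFreeableSpans comes from
// partition_alloc_constants.h and the slots hold PartitionPage*.
constexpr int16_t kMaxFreeableSpans = 16;

struct EmptyPageRing {
  std::array<void*, kMaxFreeableSpans> slots{};
  int16_t index = 0;

  // Returns the evicted occupant; the real code decommits it if still empty.
  void* Register(void* page) {
    void* evicted = slots[index];
    slots[index] = page;
    ++index;
    if (index == kMaxFreeableSpans)
      index = 0;
    return evicted;
  }
};
```

Each page therefore survives at least one full lap of the ring before it is decommitted, which is the "breathing room" the comment above refers to.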
diff --git a/base/allocator/partition_allocator/partition_page.h b/base/allocator/partition_allocator/partition_page.h
deleted file mode 100644
index e6a6eb7..0000000
--- a/base/allocator/partition_allocator/partition_page.h
+++ /dev/null
@@ -1,288 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_PAGE_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_PAGE_H_
-
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/allocator/partition_allocator/partition_bucket.h"
-#include "base/allocator/partition_allocator/partition_cookie.h"
-#include "base/allocator/partition_allocator/partition_freelist_entry.h"
-
-namespace base {
-namespace internal {
-
-struct PartitionRootBase;
-
-// Some notes on page states. A page can be in one of four major states:
-// 1) Active.
-// 2) Full.
-// 3) Empty.
-// 4) Decommitted.
-// An active page has available free slots. A full page has no free slots. An
- // empty page has no allocated slots, and a decommitted page is an empty page that
-// had its backing memory released back to the system.
-// There are two linked lists tracking the pages. The "active page" list is an
-// approximation of a list of active pages. It is an approximation because
-// full, empty and decommitted pages may briefly be present in the list until
-// we next do a scan over it.
-// The "empty page" list is an accurate list of pages which are either empty
-// or decommitted.
-//
-// The significant page transitions are:
-// - free() will detect when a full page has a slot free()'d and immediately
-// return the page to the head of the active list.
-// - free() will detect when a page is fully emptied. It _may_ add it to the
-// empty list or it _may_ leave it on the active list until a future list scan.
-// - malloc() _may_ scan the active page list in order to fulfil the request.
-// If it does this, full, empty and decommitted pages encountered will be
-// booted out of the active list. If there are no suitable active pages found,
-// an empty or decommitted page (if one exists) will be pulled from the empty
-// list on to the active list.
-//
-// TODO(ajwong): Evaluate if this should be named PartitionSlotSpanMetadata or
-// similar. If so, all uses of the term "page" in comments, member variables,
-// local variables, and documentation that refer to this concept should be
-// updated.
-struct PartitionPage {
- PartitionFreelistEntry* freelist_head;
- PartitionPage* next_page;
- PartitionBucket* bucket;
- // Deliberately signed, 0 for empty or decommitted page, -n for full pages:
- int16_t num_allocated_slots;
- uint16_t num_unprovisioned_slots;
- uint16_t page_offset;
- int16_t empty_cache_index; // -1 if not in the empty cache.
-
- // Public API
-
- // Note the matching Alloc() functions are in PartitionRootBase and PartitionBucket.
- BASE_EXPORT NOINLINE void FreeSlowPath();
- ALWAYS_INLINE void Free(void* ptr);
-
- void Decommit(PartitionRootBase* root);
- void DecommitIfPossible(PartitionRootBase* root);
-
- // Pointer manipulation functions. These must be static as the input |page|
- // pointer may be the result of an offset calculation and therefore cannot
- // be trusted. The objective of these functions is to sanitize this input.
- ALWAYS_INLINE static void* ToPointer(const PartitionPage* page);
- ALWAYS_INLINE static PartitionPage* FromPointerNoAlignmentCheck(void* ptr);
- ALWAYS_INLINE static PartitionPage* FromPointer(void* ptr);
-
- ALWAYS_INLINE const size_t* get_raw_size_ptr() const;
- ALWAYS_INLINE size_t* get_raw_size_ptr() {
- return const_cast<size_t*>(
- const_cast<const PartitionPage*>(this)->get_raw_size_ptr());
- }
-
- ALWAYS_INLINE size_t get_raw_size() const;
- ALWAYS_INLINE void set_raw_size(size_t size);
-
- ALWAYS_INLINE void Reset();
-
- // TODO(ajwong): Can this be made private? https://crbug.com/787153
- BASE_EXPORT static PartitionPage* get_sentinel_page();
-
- // Page State accessors.
- // Note that it's only valid to call these functions on pages found on one of
- // the page lists. Specifically, you can't call these functions on full pages
- // that were detached from the active list.
- //
- // This restriction provides the flexibility for some of the status fields to
- // be repurposed when a page is taken off a list. See the negation of
- // |num_allocated_slots| when a full page is removed from the active list
- // for an example of such repurposing.
- ALWAYS_INLINE bool is_active() const;
- ALWAYS_INLINE bool is_full() const;
- ALWAYS_INLINE bool is_empty() const;
- ALWAYS_INLINE bool is_decommitted() const;
-
- private:
- // sentinel_page_ is used as a sentinel to indicate that there is no page
- // in the active page list. We can use nullptr, but in that case we need
- // to add a null-check branch to the hot allocation path. We want to avoid
- // that.
- //
- // Note, this declaration is kept in the header as opposed to an anonymous
- // namespace so the getter can be fully inlined.
- static PartitionPage sentinel_page_;
-};
-static_assert(sizeof(PartitionPage) <= kPageMetadataSize,
- "PartitionPage must be able to fit in a metadata slot");
-
-ALWAYS_INLINE char* PartitionSuperPageToMetadataArea(char* ptr) {
- uintptr_t pointer_as_uint = reinterpret_cast<uintptr_t>(ptr);
- DCHECK(!(pointer_as_uint & kSuperPageOffsetMask));
- // The metadata area is exactly one system page (the guard page) into the
- // super page.
- return reinterpret_cast<char*>(pointer_as_uint + kSystemPageSize);
-}
-
-ALWAYS_INLINE PartitionPage* PartitionPage::FromPointerNoAlignmentCheck(
- void* ptr) {
- uintptr_t pointer_as_uint = reinterpret_cast<uintptr_t>(ptr);
- char* super_page_ptr =
- reinterpret_cast<char*>(pointer_as_uint & kSuperPageBaseMask);
- uintptr_t partition_page_index =
- (pointer_as_uint & kSuperPageOffsetMask) >> kPartitionPageShift;
- // Index 0 is invalid because it is the metadata and guard area and
- // the last index is invalid because it is a guard page.
- DCHECK(partition_page_index);
- DCHECK(partition_page_index < kNumPartitionPagesPerSuperPage - 1);
- PartitionPage* page = reinterpret_cast<PartitionPage*>(
- PartitionSuperPageToMetadataArea(super_page_ptr) +
- (partition_page_index << kPageMetadataShift));
- // Partition pages in the same slot span can share the same page object.
- // Adjust for that.
- size_t delta = page->page_offset << kPageMetadataShift;
- page =
- reinterpret_cast<PartitionPage*>(reinterpret_cast<char*>(page) - delta);
- return page;
-}
-
- // Returns the start of the slot span for the PartitionPage.
-ALWAYS_INLINE void* PartitionPage::ToPointer(const PartitionPage* page) {
- uintptr_t pointer_as_uint = reinterpret_cast<uintptr_t>(page);
-
- uintptr_t super_page_offset = (pointer_as_uint & kSuperPageOffsetMask);
-
- // A valid |page| must be past the first guard System page and within
- // the following metadata region.
- DCHECK(super_page_offset > kSystemPageSize);
- // Must be less than total metadata region.
- DCHECK(super_page_offset < kSystemPageSize + (kNumPartitionPagesPerSuperPage *
- kPageMetadataSize));
- uintptr_t partition_page_index =
- (super_page_offset - kSystemPageSize) >> kPageMetadataShift;
- // Index 0 is invalid because it is the superpage extent metadata and the
- // last index is invalid because the whole PartitionPage is set as guard
- // pages for the metadata region.
- DCHECK(partition_page_index);
- DCHECK(partition_page_index < kNumPartitionPagesPerSuperPage - 1);
- uintptr_t super_page_base = (pointer_as_uint & kSuperPageBaseMask);
- void* ret = reinterpret_cast<void*>(
- super_page_base + (partition_page_index << kPartitionPageShift));
- return ret;
-}
-
-ALWAYS_INLINE PartitionPage* PartitionPage::FromPointer(void* ptr) {
- PartitionPage* page = PartitionPage::FromPointerNoAlignmentCheck(ptr);
- // Checks that the pointer's offset within the slot span is a multiple of the slot size.
- DCHECK(!((reinterpret_cast<uintptr_t>(ptr) -
- reinterpret_cast<uintptr_t>(PartitionPage::ToPointer(page))) %
- page->bucket->slot_size));
- return page;
-}
-
-ALWAYS_INLINE const size_t* PartitionPage::get_raw_size_ptr() const {
- // For single-slot buckets which span more than one partition page, we
- // have some spare metadata space to store the raw allocation size. We
- // can use this to report better statistics.
- if (bucket->slot_size <= kMaxSystemPagesPerSlotSpan * kSystemPageSize)
- return nullptr;
-
- DCHECK((bucket->slot_size % kSystemPageSize) == 0);
- DCHECK(bucket->is_direct_mapped() || bucket->get_slots_per_span() == 1);
-
- const PartitionPage* the_next_page = this + 1;
- return reinterpret_cast<const size_t*>(&the_next_page->freelist_head);
-}
-
-ALWAYS_INLINE size_t PartitionPage::get_raw_size() const {
- const size_t* ptr = get_raw_size_ptr();
- if (UNLIKELY(ptr != nullptr))
- return *ptr;
- return 0;
-}
-
-ALWAYS_INLINE void PartitionPage::Free(void* ptr) {
-// If these asserts fire, you probably corrupted memory.
-#if DCHECK_IS_ON()
- size_t slot_size = this->bucket->slot_size;
- size_t raw_size = get_raw_size();
- if (raw_size)
- slot_size = raw_size;
- PartitionCookieCheckValue(ptr);
- PartitionCookieCheckValue(reinterpret_cast<char*>(ptr) + slot_size -
- kCookieSize);
- memset(ptr, kFreedByte, slot_size);
-#endif
- DCHECK(this->num_allocated_slots);
- // TODO(palmer): See if we can afford to make this a CHECK.
- // FIX FIX FIX
- // DCHECK(!freelist_head || PartitionRootBase::IsValidPage(
- // PartitionPage::FromPointer(freelist_head)));
- CHECK(ptr != freelist_head); // Catches an immediate double free.
- // Look for double free one level deeper in debug.
- DCHECK(!freelist_head || ptr != internal::PartitionFreelistEntry::Transform(
- freelist_head->next));
- internal::PartitionFreelistEntry* entry =
- static_cast<internal::PartitionFreelistEntry*>(ptr);
- entry->next = internal::PartitionFreelistEntry::Transform(freelist_head);
- freelist_head = entry;
- --this->num_allocated_slots;
- if (UNLIKELY(this->num_allocated_slots <= 0)) {
- FreeSlowPath();
- } else {
- // All single-slot allocations must go through the slow path to
- // correctly update the size metadata.
- DCHECK(get_raw_size() == 0);
- }
-}
-
-ALWAYS_INLINE bool PartitionPage::is_active() const {
- DCHECK(this != get_sentinel_page());
- DCHECK(!page_offset);
- return (num_allocated_slots > 0 &&
- (freelist_head || num_unprovisioned_slots));
-}
-
-ALWAYS_INLINE bool PartitionPage::is_full() const {
- DCHECK(this != get_sentinel_page());
- DCHECK(!page_offset);
- bool ret = (num_allocated_slots == bucket->get_slots_per_span());
- if (ret) {
- DCHECK(!freelist_head);
- DCHECK(!num_unprovisioned_slots);
- }
- return ret;
-}
-
-ALWAYS_INLINE bool PartitionPage::is_empty() const {
- DCHECK(this != get_sentinel_page());
- DCHECK(!page_offset);
- return (!num_allocated_slots && freelist_head);
-}
-
-ALWAYS_INLINE bool PartitionPage::is_decommitted() const {
- DCHECK(this != get_sentinel_page());
- DCHECK(!page_offset);
- bool ret = (!num_allocated_slots && !freelist_head);
- if (ret) {
- DCHECK(!num_unprovisioned_slots);
- DCHECK(empty_cache_index == -1);
- }
- return ret;
-}
-
-ALWAYS_INLINE void PartitionPage::set_raw_size(size_t size) {
- size_t* raw_size_ptr = get_raw_size_ptr();
- if (UNLIKELY(raw_size_ptr != nullptr))
- *raw_size_ptr = size;
-}
-
-ALWAYS_INLINE void PartitionPage::Reset() {
- DCHECK(this->is_decommitted());
-
- num_unprovisioned_slots = bucket->get_slots_per_span();
- DCHECK(num_unprovisioned_slots);
-
- next_page = nullptr;
-}
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_PAGE_H_
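The four state predicates at the bottom of the header carve up the (num_allocated_slots, freelist_head, num_unprovisioned_slots) space. A sketch of the same classification as a plain function — valid only for pages still on a list, since detached full pages negate num_allocated_slots, as noted above:

```cpp
#include <cstdint>

enum class PageState { kActive, kFull, kEmpty, kDecommitted };

// Mirrors is_full()/is_active()/is_empty()/is_decommitted() from
// partition_page.h; a sketch, not the production code.
PageState Classify(int16_t num_allocated_slots,
                   bool has_freelist,
                   uint16_t num_unprovisioned_slots,
                   uint16_t slots_per_span) {
  if (num_allocated_slots == static_cast<int16_t>(slots_per_span))
    return PageState::kFull;    // No freelist, no unprovisioned slots left.
  if (num_allocated_slots > 0)
    return PageState::kActive;  // Has a freelist or unprovisioned slots.
  // No allocated slots: the freelist distinguishes empty from decommitted.
  return has_freelist ? PageState::kEmpty : PageState::kDecommitted;
}
```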
diff --git a/base/allocator/partition_allocator/partition_root_base.cc b/base/allocator/partition_allocator/partition_root_base.cc
deleted file mode 100644
index db51d02..0000000
--- a/base/allocator/partition_allocator/partition_root_base.cc
+++ /dev/null
@@ -1,40 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/partition_root_base.h"
-
-#include "base/allocator/partition_allocator/oom.h"
-#include "base/allocator/partition_allocator/partition_oom.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-#include "build_config.h"
-
-namespace base {
-namespace internal {
-
-NOINLINE void PartitionRootBase::OutOfMemory() {
-#if !defined(ARCH_CPU_64_BITS)
- // Check whether this OOM is due to a lot of super pages that are allocated
- // but not committed, probably due to http://crbug.com/421387.
- if (total_size_of_super_pages + total_size_of_direct_mapped_pages -
- total_size_of_committed_pages >
- kReasonableSizeOfUnusedPages) {
- PartitionOutOfMemoryWithLotsOfUncommitedPages();
- }
-#endif
- if (PartitionRootBase::gOomHandlingFunction)
- (*PartitionRootBase::gOomHandlingFunction)();
- OOM_CRASH();
-}
-
-void PartitionRootBase::DecommitEmptyPages() {
- for (size_t i = 0; i < kMaxFreeableSpans; ++i) {
- internal::PartitionPage* page = global_empty_page_ring[i];
- if (page)
- page->DecommitIfPossible(this);
- global_empty_page_ring[i] = nullptr;
- }
-}
-
-} // namespace internal
-} // namespace base
diff --git a/base/allocator/partition_allocator/partition_root_base.h b/base/allocator/partition_allocator/partition_root_base.h
deleted file mode 100644
index e20990e..0000000
--- a/base/allocator/partition_allocator/partition_root_base.h
+++ /dev/null
@@ -1,177 +0,0 @@
-// Copyright (c) 2018 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ROOT_BASE_H_
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ROOT_BASE_H_
-
-#include "base/allocator/partition_allocator/page_allocator.h"
-#include "base/allocator/partition_allocator/partition_alloc_constants.h"
-#include "base/allocator/partition_allocator/partition_bucket.h"
-#include "base/allocator/partition_allocator/partition_direct_map_extent.h"
-#include "base/allocator/partition_allocator/partition_page.h"
-
-namespace base {
-namespace internal {
-
-struct PartitionPage;
-struct PartitionRootBase;
-
-// An "extent" is a span of consecutive superpages. We link to the partition's
- // next extent (if there is one) at the very start of a superpage's metadata
-// area.
-struct PartitionSuperPageExtentEntry {
- PartitionRootBase* root;
- char* super_page_base;
- char* super_pages_end;
- PartitionSuperPageExtentEntry* next;
-};
-static_assert(
- sizeof(PartitionSuperPageExtentEntry) <= kPageMetadataSize,
- "PartitionSuperPageExtentEntry must be able to fit in a metadata slot");
-
-struct BASE_EXPORT PartitionRootBase {
- PartitionRootBase();
- virtual ~PartitionRootBase();
- size_t total_size_of_committed_pages = 0;
- size_t total_size_of_super_pages = 0;
- size_t total_size_of_direct_mapped_pages = 0;
- // Invariant: total_size_of_committed_pages <=
- // total_size_of_super_pages +
- // total_size_of_direct_mapped_pages.
- unsigned num_buckets = 0;
- unsigned max_allocation = 0;
- bool initialized = false;
- char* next_super_page = nullptr;
- char* next_partition_page = nullptr;
- char* next_partition_page_end = nullptr;
- PartitionSuperPageExtentEntry* current_extent = nullptr;
- PartitionSuperPageExtentEntry* first_extent = nullptr;
- PartitionDirectMapExtent* direct_map_list = nullptr;
- PartitionPage* global_empty_page_ring[kMaxFreeableSpans] = {};
- int16_t global_empty_page_ring_index = 0;
- uintptr_t inverted_self = 0;
-
- // Public API
-
- // Allocates out of the given bucket. Properly, this function should probably
- // be in PartitionBucket, but because the implementation needs to be inlined
- // for performance, and because it needs to inspect PartitionPage,
- // it becomes impossible to have it in PartitionBucket as this causes a
- // cyclical dependency on PartitionPage function implementations.
- //
- // Moving it a layer lower couples PartitionRootBase and PartitionBucket, but
- // preserves the layering of the includes.
- //
- // Note the matching Free() functions are in PartitionPage.
- ALWAYS_INLINE void* AllocFromBucket(PartitionBucket* bucket,
- int flags,
- size_t size);
-
- ALWAYS_INLINE static bool IsValidPage(PartitionPage* page);
- ALWAYS_INLINE static PartitionRootBase* FromPage(PartitionPage* page);
-
- // gOomHandlingFunction is invoked when PartitionAlloc hits OutOfMemory.
- static void (*gOomHandlingFunction)();
- NOINLINE void OutOfMemory();
-
- ALWAYS_INLINE void IncreaseCommittedPages(size_t len);
- ALWAYS_INLINE void DecreaseCommittedPages(size_t len);
- ALWAYS_INLINE void DecommitSystemPages(void* address, size_t length);
- ALWAYS_INLINE void RecommitSystemPages(void* address, size_t length);
-
- void DecommitEmptyPages();
-};
-
-ALWAYS_INLINE void* PartitionRootBase::AllocFromBucket(PartitionBucket* bucket,
- int flags,
- size_t size) {
- PartitionPage* page = bucket->active_pages_head;
- // Check that this page is neither full nor freed.
- DCHECK(page->num_allocated_slots >= 0);
- void* ret = page->freelist_head;
- if (LIKELY(ret != 0)) {
- // If these DCHECKs fire, you probably corrupted memory.
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(PartitionRootBase::IsValidPage(page));
- // All large allocations must go through the slow path to correctly
- // update the size metadata.
- DCHECK(page->get_raw_size() == 0);
- internal::PartitionFreelistEntry* new_head =
- internal::PartitionFreelistEntry::Transform(
- static_cast<internal::PartitionFreelistEntry*>(ret)->next);
- page->freelist_head = new_head;
- page->num_allocated_slots++;
- } else {
- ret = bucket->SlowPathAlloc(this, flags, size);
- // TODO(palmer): See if we can afford to make this a CHECK.
- DCHECK(!ret ||
- PartitionRootBase::IsValidPage(PartitionPage::FromPointer(ret)));
- }
-#if DCHECK_IS_ON()
- if (!ret)
- return 0;
- // Fill the uninitialized pattern, and write the cookies.
- page = PartitionPage::FromPointer(ret);
- // TODO(ajwong): Can |page->bucket| ever not be |bucket|? If not, can this just
- // be bucket->slot_size?
- size_t new_slot_size = page->bucket->slot_size;
- size_t raw_size = page->get_raw_size();
- if (raw_size) {
- DCHECK(raw_size == size);
- new_slot_size = raw_size;
- }
- size_t no_cookie_size = PartitionCookieSizeAdjustSubtract(new_slot_size);
- char* char_ret = static_cast<char*>(ret);
- // The value given to the application is actually just after the cookie.
- ret = char_ret + kCookieSize;
-
- // Fill the region with kUninitializedByte and surround it with the 2 cookies.
- PartitionCookieWriteValue(char_ret);
- memset(ret, kUninitializedByte, no_cookie_size);
- PartitionCookieWriteValue(char_ret + kCookieSize + no_cookie_size);
-#endif
- return ret;
-}
-
-ALWAYS_INLINE bool PartitionRootBase::IsValidPage(PartitionPage* page) {
- PartitionRootBase* root = PartitionRootBase::FromPage(page);
- return root->inverted_self == ~reinterpret_cast<uintptr_t>(root);
-}
-
-ALWAYS_INLINE PartitionRootBase* PartitionRootBase::FromPage(
- PartitionPage* page) {
- PartitionSuperPageExtentEntry* extent_entry =
- reinterpret_cast<PartitionSuperPageExtentEntry*>(
- reinterpret_cast<uintptr_t>(page) & kSystemPageBaseMask);
- return extent_entry->root;
-}
-
-ALWAYS_INLINE void PartitionRootBase::IncreaseCommittedPages(size_t len) {
- total_size_of_committed_pages += len;
- DCHECK(total_size_of_committed_pages <=
- total_size_of_super_pages + total_size_of_direct_mapped_pages);
-}
-
-ALWAYS_INLINE void PartitionRootBase::DecreaseCommittedPages(size_t len) {
- total_size_of_committed_pages -= len;
- DCHECK(total_size_of_committed_pages <=
- total_size_of_super_pages + total_size_of_direct_mapped_pages);
-}
-
-ALWAYS_INLINE void PartitionRootBase::DecommitSystemPages(void* address,
- size_t length) {
- ::base::DecommitSystemPages(address, length);
- DecreaseCommittedPages(length);
-}
-
-ALWAYS_INLINE void PartitionRootBase::RecommitSystemPages(void* address,
- size_t length) {
- CHECK(::base::RecommitSystemPages(address, length, PageReadWrite));
- IncreaseCommittedPages(length);
-}
-
-} // namespace internal
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_PARTITION_ROOT_BASE_H_
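IsValidPage() above works because every live root stores the bitwise complement of its own address in inverted_self; metadata reached from a bogus page pointer will almost certainly fail the comparison. A minimal demo of the trick:

```cpp
#include <cassert>
#include <cstdint>

// Sketch of the inverted_self validity check from PartitionRootBase.
struct Root {
  uintptr_t inverted_self = 0;
  void Init() { inverted_self = ~reinterpret_cast<uintptr_t>(this); }
  bool IsValid() const {
    return inverted_self == ~reinterpret_cast<uintptr_t>(this);
  }
};

int main() {
  Root root;
  assert(!root.IsValid());  // Zeroed memory does not masquerade as a root.
  root.Init();
  assert(root.IsValid());
  return 0;
}
```

Storing the complement rather than the address itself means zero-filled or freshly mapped memory never passes the check by accident.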
diff --git a/base/allocator/partition_allocator/spin_lock.cc b/base/allocator/partition_allocator/spin_lock.cc
deleted file mode 100644
index 0250c58..0000000
--- a/base/allocator/partition_allocator/spin_lock.cc
+++ /dev/null
@@ -1,104 +0,0 @@
-// Copyright 2015 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/allocator/partition_allocator/spin_lock.h"
-#include "build_config.h"
-
-#if defined(OS_WIN)
-#include <windows.h>
-#elif defined(OS_POSIX) || defined(OS_FUCHSIA)
-#include <sched.h>
-#endif
-
-#include "base/threading/platform_thread.h"
-
- // The YIELD_PROCESSOR macro wraps an architecture-specific instruction that
-// informs the processor we're in a busy wait, so it can handle the branch more
-// intelligently and e.g. reduce power to our core or give more resources to the
-// other hyper-thread on this core. See the following for context:
-// https://software.intel.com/en-us/articles/benefitting-power-and-performance-sleep-loops
-//
-// The YIELD_THREAD macro tells the OS to relinquish our quantum. This is
-// basically a worst-case fallback, and if you're hitting it with any frequency
- // you really should be using a proper lock (such as |base::Lock|) rather than
-// these spinlocks.
-#if defined(OS_WIN)
-
-#define YIELD_PROCESSOR YieldProcessor()
-#define YIELD_THREAD SwitchToThread()
-
-#elif defined(OS_POSIX) || defined(OS_FUCHSIA)
-
-#if defined(ARCH_CPU_X86_64) || defined(ARCH_CPU_X86)
-#define YIELD_PROCESSOR __asm__ __volatile__("pause")
-#elif (defined(ARCH_CPU_ARMEL) && __ARM_ARCH >= 6) || defined(ARCH_CPU_ARM64)
-#define YIELD_PROCESSOR __asm__ __volatile__("yield")
-#elif defined(ARCH_CPU_MIPSEL)
-// The MIPS32 docs state that the PAUSE instruction is a no-op on older
-// architectures (first added in MIPS32r2). To avoid assembler errors when
-// targeting pre-r2, we must encode the instruction manually.
-#define YIELD_PROCESSOR __asm__ __volatile__(".word 0x00000140")
-#elif defined(ARCH_CPU_MIPS64EL) && __mips_isa_rev >= 2
- // Don't bother using .word here since r2 is the lowest supported mips64
-// that Chromium supports.
-#define YIELD_PROCESSOR __asm__ __volatile__("pause")
-#elif defined(ARCH_CPU_PPC64_FAMILY)
-#define YIELD_PROCESSOR __asm__ __volatile__("or 31,31,31")
-#elif defined(ARCH_CPU_S390_FAMILY)
-// just do nothing
-#define YIELD_PROCESSOR ((void)0)
-#endif // ARCH
-
-#ifndef YIELD_PROCESSOR
-#warning "Processor yield not supported on this architecture."
-#define YIELD_PROCESSOR ((void)0)
-#endif
-
-#define YIELD_THREAD sched_yield()
-
-#else // Other OS
-
-#warning "Thread yield not supported on this OS."
-#define YIELD_THREAD ((void)0)
-
-#endif // OS_WIN
-
-namespace base {
-namespace subtle {
-
-void SpinLock::LockSlow() {
- // The value of |kYieldProcessorTries| is cargo culted from TCMalloc, Windows
- // critical section defaults, and various other recommendations.
- // TODO(jschuh): Further tuning may be warranted.
- static const int kYieldProcessorTries = 1000;
- // The value of |kYieldThreadTries| is completely made up.
- static const int kYieldThreadTries = 10;
- int yield_thread_count = 0;
- do {
- do {
- for (int count = 0; count < kYieldProcessorTries; ++count) {
- // Let the processor know we're spinning.
- YIELD_PROCESSOR;
- if (!lock_.load(std::memory_order_relaxed) &&
- LIKELY(!lock_.exchange(true, std::memory_order_acquire)))
- return;
- }
-
- if (yield_thread_count < kYieldThreadTries) {
- ++yield_thread_count;
- // Give the OS a chance to schedule something on this core.
- YIELD_THREAD;
- } else {
- // At this point, it's likely that the lock is held by a lower priority
- // thread that is unavailable to finish its work because of higher
- // priority threads spinning here. Sleeping should ensure that they make
- // progress.
- PlatformThread::Sleep(base::TimeDelta::FromMilliseconds(1));
- }
- } while (lock_.load(std::memory_order_relaxed));
- } while (UNLIKELY(lock_.exchange(true, std::memory_order_acquire)));
-}
-
-} // namespace subtle
-} // namespace base
diff --git a/base/allocator/partition_allocator/spin_lock.h b/base/allocator/partition_allocator/spin_lock.h
deleted file mode 100644
index e698b56..0000000
--- a/base/allocator/partition_allocator/spin_lock.h
+++ /dev/null
@@ -1,50 +0,0 @@
-// Copyright (c) 2013 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_ALLOCATOR_PARTITION_ALLOCATOR_SPIN_LOCK_H
-#define BASE_ALLOCATOR_PARTITION_ALLOCATOR_SPIN_LOCK_H
-
-#include <atomic>
-#include <memory>
-#include <mutex>
-
-#include "base/base_export.h"
-#include "base/compiler_specific.h"
-
- // SpinLock is a simple spinlock class based on the standard CPU primitive
- // of atomic exchange of an int at a given memory address. These are
-// intended only for very short duration locks and assume a system with multiple
-// cores. For any potentially longer wait you should use a real lock, such as
-// |base::Lock|.
-namespace base {
-namespace subtle {
-
-class BASE_EXPORT SpinLock {
- public:
- constexpr SpinLock() = default;
- ~SpinLock() = default;
- using Guard = std::lock_guard<SpinLock>;
-
- ALWAYS_INLINE void lock() {
- static_assert(sizeof(lock_) == sizeof(int),
- "int and lock_ are different sizes");
- if (LIKELY(!lock_.exchange(true, std::memory_order_acquire)))
- return;
- LockSlow();
- }
-
- ALWAYS_INLINE void unlock() { lock_.store(false, std::memory_order_release); }
-
- private:
- // This is called if the initial attempt to acquire the lock fails. It's
- // slower, but has a much better scheduling and power consumption behavior.
- void LockSlow();
-
- std::atomic_int lock_{0};
-};
-
-} // namespace subtle
-} // namespace base
-
-#endif // BASE_ALLOCATOR_PARTITION_ALLOCATOR_SPIN_LOCK_H
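Usage goes through the Guard alias, so lock/unlock pair via RAII just like any standard mutex. A short example grounded in the header above; the g_lock/g_counter names are illustrative:

```cpp
#include "base/allocator/partition_allocator/spin_lock.h"

namespace {

// Illustrative globals; any very short critical section works the same way.
base::subtle::SpinLock g_lock;
int g_counter = 0;

}  // namespace

void IncrementCounter() {
  // Guard is std::lock_guard<SpinLock>: lock() on entry, unlock() on exit.
  base::subtle::SpinLock::Guard guard(g_lock);
  ++g_counter;
}
```

Per the comment above, anything that can block or run long belongs under base::Lock instead; a contended spinlock burns CPU until it falls back to yielding and sleeping.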
diff --git a/base/allocator/unittest_utils.cc b/base/allocator/unittest_utils.cc
deleted file mode 100644
index 051d568..0000000
--- a/base/allocator/unittest_utils.cc
+++ /dev/null
@@ -1,19 +0,0 @@
-// Copyright (c) 2009 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
- // The unittests need this in order to link without pulling in tons
- // of other libraries.
-
-#include <config.h>
-#include <stdarg.h>
-#include <stddef.h>
-
-inline int snprintf(char* buffer, size_t count, const char* format, ...) {
- int result;
- va_list args;
- va_start(args, format);
- result = _vsnprintf(buffer, count, format, args);
- va_end(args);
- return result;
-}
-
diff --git a/base/allocator/winheap_stubs_win.cc b/base/allocator/winheap_stubs_win.cc
deleted file mode 100644
index 8aa5298..0000000
--- a/base/allocator/winheap_stubs_win.cc
+++ /dev/null
@@ -1,94 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// This code should move into the default Windows shim once the win-specific
- // allocation shim has been removed, and the generic shim has become the
-// default.
-
-#include "winheap_stubs_win.h"
-
-#include <limits.h>
-#include <malloc.h>
-#include <new.h>
-#include <windows.h>
-
-namespace base {
-namespace allocator {
-
-bool g_is_win_shim_layer_initialized = false;
-
-namespace {
-
-const size_t kWindowsPageSize = 4096;
-const size_t kMaxWindowsAllocation = INT_MAX - kWindowsPageSize;
-
-inline HANDLE get_heap_handle() {
- return reinterpret_cast<HANDLE>(_get_heap_handle());
-}
-
-} // namespace
-
-void* WinHeapMalloc(size_t size) {
- if (size < kMaxWindowsAllocation)
- return HeapAlloc(get_heap_handle(), 0, size);
- return nullptr;
-}
-
-void WinHeapFree(void* ptr) {
- if (!ptr)
- return;
-
- HeapFree(get_heap_handle(), 0, ptr);
-}
-
-void* WinHeapRealloc(void* ptr, size_t size) {
- if (!ptr)
- return WinHeapMalloc(size);
- if (!size) {
- WinHeapFree(ptr);
- return nullptr;
- }
- if (size < kMaxWindowsAllocation)
- return HeapReAlloc(get_heap_handle(), 0, ptr, size);
- return nullptr;
-}
-
-size_t WinHeapGetSizeEstimate(void* ptr) {
- if (!ptr)
- return 0;
-
- // Get the user size of the allocation.
- size_t size = HeapSize(get_heap_handle(), 0, ptr);
-
- // Account for the 8-byte HEAP_HEADER preceding the block.
- size += 8;
-
-// Round up to the nearest allocation granularity, which is 8 for
-// 32 bit machines, and 16 for 64 bit machines.
-#if defined(ARCH_CPU_64_BITS)
- const size_t kAllocationGranularity = 16;
-#else
- const size_t kAllocationGranularity = 8;
-#endif
-
- return (size + kAllocationGranularity - 1) & ~(kAllocationGranularity - 1);
-}
-
-// Call the new handler, if one has been set.
-// Returns true on successfully calling the handler, false otherwise.
-bool WinCallNewHandler(size_t size) {
-#ifdef _CPPUNWIND
-#error "Exceptions in allocator shim are not supported!"
-#endif // _CPPUNWIND
- // Get the current new handler.
- _PNH nh = _query_new_handler();
- if (!nh)
- return false;
- // Since exceptions are disabled, we don't really know if new_handler
- // failed. Assume it will abort if it fails.
- return nh(size) ? true : false;
-}
-
-} // namespace allocator
-} // namespace base
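The tail of WinHeapGetSizeEstimate() is the usual power-of-two round-up idiom. Worked through for a 64-bit build: a 100-byte block plus the 8-byte header is 108, and (108 + 15) & ~15 rounds to 112. A self-contained restatement:

```cpp
#include <cstddef>

// The round-up used above; |granularity| must be a power of two.
constexpr size_t RoundUpToGranularity(size_t size, size_t granularity) {
  return (size + granularity - 1) & ~(granularity - 1);
}

static_assert(RoundUpToGranularity(100 + 8, 16) == 112,
              "100-byte block plus 8-byte header, 64-bit granularity");
static_assert(RoundUpToGranularity(112, 16) == 112,
              "already-aligned sizes are unchanged");
```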
diff --git a/base/allocator/winheap_stubs_win.h b/base/allocator/winheap_stubs_win.h
deleted file mode 100644
index 422dfe0..0000000
--- a/base/allocator/winheap_stubs_win.h
+++ /dev/null
@@ -1,38 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-// Thin allocation wrappers for the windows heap. This file should be deleted
-// once the win-specific allocation shim has been removed, and the generic shim
- // has become the default.
-
-#ifndef BASE_ALLOCATOR_WINHEAP_STUBS_H_
-#define BASE_ALLOCATOR_WINHEAP_STUBS_H_
-
-#include <stdint.h>
-
-namespace base {
-namespace allocator {
-
-// Set to true if the link-time magic has successfully hooked into the CRT's
-// heap initialization.
-extern bool g_is_win_shim_layer_initialized;
-
-// Thin wrappers to implement the standard C allocation semantics on the
-// CRT's Windows heap.
-void* WinHeapMalloc(size_t size);
-void WinHeapFree(void* ptr);
-void* WinHeapRealloc(void* ptr, size_t size);
-
-// Returns a lower-bound estimate for the full amount of memory consumed by the
- // allocation |ptr|.
-size_t WinHeapGetSizeEstimate(void* ptr);
-
-// Call the new handler, if one has been set.
-// Returns true on successfully calling the handler, false otherwise.
-bool WinCallNewHandler(size_t size);
-
-} // namespace allocator
-} // namespace base
-
-#endif // BASE_ALLOCATOR_WINHEAP_STUBS_H_
\ No newline at end of file
diff --git a/base/debug/thread_heap_usage_tracker.cc b/base/debug/thread_heap_usage_tracker.cc
deleted file mode 100644
index 7dda2f7..0000000
--- a/base/debug/thread_heap_usage_tracker.cc
+++ /dev/null
@@ -1,331 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#include "base/debug/thread_heap_usage_tracker.h"
-
-#include <stdint.h>
-#include <algorithm>
-#include <limits>
-#include <new>
-#include <type_traits>
-
-#include "base/allocator/allocator_shim.h"
-#include "base/logging.h"
-#include "base/no_destructor.h"
-#include "base/threading/thread_local_storage.h"
-#include "build_config.h"
-
-#if defined(OS_MACOSX) || defined(OS_IOS)
-#include <malloc/malloc.h>
-#else
-#include <malloc.h>
-#endif
-
-namespace base {
-namespace debug {
-
-namespace {
-
-using base::allocator::AllocatorDispatch;
-
-const uintptr_t kSentinelMask = std::numeric_limits<uintptr_t>::max() - 1;
-ThreadHeapUsage* const kInitializationSentinel =
- reinterpret_cast<ThreadHeapUsage*>(kSentinelMask);
-ThreadHeapUsage* const kTeardownSentinel =
- reinterpret_cast<ThreadHeapUsage*>(kSentinelMask | 1);
-
-ThreadLocalStorage::Slot& ThreadAllocationUsage() {
- static NoDestructor<ThreadLocalStorage::Slot> thread_allocator_usage(
- [](void* thread_heap_usage) {
- // This destructor will be called twice. Once to destroy the actual
- // ThreadHeapUsage instance and a second time, immediately after, for
- // the sentinel. Re-setting the TLS slot (below) does re-initialize the
- // TLS slot. The ThreadLocalStorage code is designed to deal with this
- // use case and will re-call the destructor with the kTeardownSentinel
- // as arg.
- if (thread_heap_usage == kTeardownSentinel)
- return;
- DCHECK_NE(thread_heap_usage, kInitializationSentinel);
-
- // Deleting the ThreadHeapUsage TLS object will re-enter the shim and
- // hit RecordFree() (see below). The sentinel prevents RecordFree() from
- // re-creating another ThreadHeapUsage object.
- ThreadAllocationUsage().Set(kTeardownSentinel);
- delete static_cast<ThreadHeapUsage*>(thread_heap_usage);
- });
- return *thread_allocator_usage;
-}
-
-bool g_heap_tracking_enabled = false;
-
-// Forward declared as it needs to delegate memory allocation to the next
-// lower shim.
-ThreadHeapUsage* GetOrCreateThreadUsage();
-
-size_t GetAllocSizeEstimate(const AllocatorDispatch* next,
- void* ptr,
- void* context) {
- if (ptr == nullptr)
- return 0U;
-
- return next->get_size_estimate_function(next, ptr, context);
-}
-
-void RecordAlloc(const AllocatorDispatch* next,
- void* ptr,
- size_t size,
- void* context) {
- ThreadHeapUsage* usage = GetOrCreateThreadUsage();
- if (usage == nullptr)
- return;
-
- usage->alloc_ops++;
- size_t estimate = GetAllocSizeEstimate(next, ptr, context);
- if (size && estimate) {
- // Only keep track of the net number of bytes allocated in the scope if the
- // size estimate function returns sane values, e.g. non-zero.
- usage->alloc_bytes += estimate;
- usage->alloc_overhead_bytes += estimate - size;
-
- // Record the max outstanding number of bytes, but only if the difference
- // is net positive (e.g. more bytes allocated than freed in the scope).
- if (usage->alloc_bytes > usage->free_bytes) {
- uint64_t allocated_bytes = usage->alloc_bytes - usage->free_bytes;
- if (allocated_bytes > usage->max_allocated_bytes)
- usage->max_allocated_bytes = allocated_bytes;
- }
- } else {
- usage->alloc_bytes += size;
- }
-}
-
-void RecordFree(const AllocatorDispatch* next, void* ptr, void* context) {
- ThreadHeapUsage* usage = GetOrCreateThreadUsage();
- if (usage == nullptr)
- return;
-
- size_t estimate = GetAllocSizeEstimate(next, ptr, context);
- usage->free_ops++;
- usage->free_bytes += estimate;
-}
-
-void* AllocFn(const AllocatorDispatch* self, size_t size, void* context) {
- void* ret = self->next->alloc_function(self->next, size, context);
- if (ret != nullptr)
- RecordAlloc(self->next, ret, size, context);
-
- return ret;
-}
-
-void* AllocZeroInitializedFn(const AllocatorDispatch* self,
- size_t n,
- size_t size,
- void* context) {
- void* ret =
- self->next->alloc_zero_initialized_function(self->next, n, size, context);
- if (ret != nullptr)
- RecordAlloc(self->next, ret, size, context);
-
- return ret;
-}
-
-void* AllocAlignedFn(const AllocatorDispatch* self,
- size_t alignment,
- size_t size,
- void* context) {
- void* ret =
- self->next->alloc_aligned_function(self->next, alignment, size, context);
- if (ret != nullptr)
- RecordAlloc(self->next, ret, size, context);
-
- return ret;
-}
-
-void* ReallocFn(const AllocatorDispatch* self,
- void* address,
- size_t size,
- void* context) {
- if (address != nullptr)
- RecordFree(self->next, address, context);
-
- void* ret = self->next->realloc_function(self->next, address, size, context);
- if (ret != nullptr && size != 0)
- RecordAlloc(self->next, ret, size, context);
-
- return ret;
-}
-
-void FreeFn(const AllocatorDispatch* self, void* address, void* context) {
- if (address != nullptr)
- RecordFree(self->next, address, context);
- self->next->free_function(self->next, address, context);
-}
-
-size_t GetSizeEstimateFn(const AllocatorDispatch* self,
- void* address,
- void* context) {
- return self->next->get_size_estimate_function(self->next, address, context);
-}
-
-unsigned BatchMallocFn(const AllocatorDispatch* self,
- size_t size,
- void** results,
- unsigned num_requested,
- void* context) {
- unsigned count = self->next->batch_malloc_function(self->next, size, results,
- num_requested, context);
- for (unsigned i = 0; i < count; ++i) {
- RecordAlloc(self->next, results[i], size, context);
- }
- return count;
-}
-
-void BatchFreeFn(const AllocatorDispatch* self,
- void** to_be_freed,
- unsigned num_to_be_freed,
- void* context) {
- for (unsigned i = 0; i < num_to_be_freed; ++i) {
- if (to_be_freed[i] != nullptr) {
- RecordFree(self->next, to_be_freed[i], context);
- }
- }
- self->next->batch_free_function(self->next, to_be_freed, num_to_be_freed,
- context);
-}
-
-void FreeDefiniteSizeFn(const AllocatorDispatch* self,
- void* ptr,
- size_t size,
- void* context) {
- if (ptr != nullptr)
- RecordFree(self->next, ptr, context);
- self->next->free_definite_size_function(self->next, ptr, size, context);
-}
-
-// The allocator dispatch used to intercept heap operations.
-AllocatorDispatch allocator_dispatch = {&AllocFn,
- &AllocZeroInitializedFn,
- &AllocAlignedFn,
- &ReallocFn,
- &FreeFn,
- &GetSizeEstimateFn,
- &BatchMallocFn,
- &BatchFreeFn,
- &FreeDefiniteSizeFn,
- nullptr};
-
-ThreadHeapUsage* GetOrCreateThreadUsage() {
- auto tls_ptr = reinterpret_cast<uintptr_t>(ThreadAllocationUsage().Get());
- if ((tls_ptr & kSentinelMask) == kSentinelMask)
- return nullptr; // Re-entrancy case.
-
- auto* allocator_usage = reinterpret_cast<ThreadHeapUsage*>(tls_ptr);
- if (allocator_usage == nullptr) {
- // Prevent reentrancy due to the allocation below.
- ThreadAllocationUsage().Set(kInitializationSentinel);
-
- allocator_usage = new ThreadHeapUsage();
- static_assert(std::is_pod<ThreadHeapUsage>::value,
- "AllocatorDispatch must be POD");
- memset(allocator_usage, 0, sizeof(*allocator_usage));
- ThreadAllocationUsage().Set(allocator_usage);
- }
-
- return allocator_usage;
-}
-
-} // namespace
-
-ThreadHeapUsageTracker::ThreadHeapUsageTracker() : thread_usage_(nullptr) {
- static_assert(std::is_pod<ThreadHeapUsage>::value, "Must be POD.");
-}
-
-ThreadHeapUsageTracker::~ThreadHeapUsageTracker() {
- DCHECK(thread_checker_.CalledOnValidThread());
-
- if (thread_usage_ != nullptr) {
- // If this tracker wasn't stopped, make it inclusive so that the
- // usage isn't lost.
- Stop(false);
- }
-}
-
-void ThreadHeapUsageTracker::Start() {
- DCHECK(thread_checker_.CalledOnValidThread());
-
- thread_usage_ = GetOrCreateThreadUsage();
- usage_ = *thread_usage_;
-
- // Reset the stats for our current scope.
- // The per-thread usage instance now tracks this scope's usage, while this
- // instance persists the outer scope's usage stats. On destruction, this
- // instance will restore the outer scope's usage stats with this scope's
- // usage added.
- memset(thread_usage_, 0, sizeof(*thread_usage_));
-}
-
-void ThreadHeapUsageTracker::Stop(bool usage_is_exclusive) {
- DCHECK(thread_checker_.CalledOnValidThread());
- DCHECK_NE(nullptr, thread_usage_);
-
- ThreadHeapUsage current = *thread_usage_;
- if (usage_is_exclusive) {
- // Restore the outer scope.
- *thread_usage_ = usage_;
- } else {
- // Update the outer scope with the accrued inner usage.
- if (thread_usage_->max_allocated_bytes) {
- uint64_t outer_net_alloc_bytes = usage_.alloc_bytes - usage_.free_bytes;
-
- thread_usage_->max_allocated_bytes =
- std::max(usage_.max_allocated_bytes,
- outer_net_alloc_bytes + thread_usage_->max_allocated_bytes);
- }
-
- thread_usage_->alloc_ops += usage_.alloc_ops;
- thread_usage_->alloc_bytes += usage_.alloc_bytes;
- thread_usage_->alloc_overhead_bytes += usage_.alloc_overhead_bytes;
- thread_usage_->free_ops += usage_.free_ops;
- thread_usage_->free_bytes += usage_.free_bytes;
- }
-
- thread_usage_ = nullptr;
- usage_ = current;
-}
-
-ThreadHeapUsage ThreadHeapUsageTracker::GetUsageSnapshot() {
- ThreadHeapUsage* usage = GetOrCreateThreadUsage();
- DCHECK_NE(nullptr, usage);
- return *usage;
-}
-
-void ThreadHeapUsageTracker::EnableHeapTracking() {
- EnsureTLSInitialized();
-
- CHECK_EQ(false, g_heap_tracking_enabled) << "No double-enabling.";
- g_heap_tracking_enabled = true;
- CHECK(false) << "Can't enable heap tracking without the shim.";
-}
-
-bool ThreadHeapUsageTracker::IsHeapTrackingEnabled() {
- return g_heap_tracking_enabled;
-}
-
-void ThreadHeapUsageTracker::DisableHeapTrackingForTesting() {
- CHECK(false) << "Can't disable heap tracking without the shim.";
- DCHECK_EQ(true, g_heap_tracking_enabled) << "Heap tracking not enabled.";
- g_heap_tracking_enabled = false;
-}
-
-base::allocator::AllocatorDispatch*
-ThreadHeapUsageTracker::GetDispatchForTesting() {
- return &allocator_dispatch;
-}
-
-void ThreadHeapUsageTracker::EnsureTLSInitialized() {
- ignore_result(ThreadAllocationUsage());
-}
-
-} // namespace debug
-} // namespace base
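
The file deleted above hooked heap operations by inserting itself into the allocator shim's dispatch chain: every hook does its bookkeeping and then forwards to `self->next`. Below is a minimal sketch of that chaining pattern; the `AllocatorDispatch` shape is abridged to two entry points, and the no-op `RecordAlloc`/`RecordFree` stand in for the tallying done in the deleted code.

  #include <cstddef>

  // Abridged dispatch shape, assuming the shim's real struct: a table of
  // function pointers plus a link to the next dispatch in the chain.
  struct AllocatorDispatch {
    void* (*alloc_function)(const AllocatorDispatch* self, size_t size,
                            void* context);
    void (*free_function)(const AllocatorDispatch* self, void* address,
                          void* context);
    const AllocatorDispatch* next;
  };

  void RecordAlloc(void* /*address*/, size_t /*size*/) {}  // Tally, as above.
  void RecordFree(void* /*address*/) {}                    // Tally, as above.

  void* HookedAlloc(const AllocatorDispatch* self, size_t size, void* context) {
    // Delegate the actual allocation downstream, then account for it.
    void* ptr = self->next->alloc_function(self->next, size, context);
    if (ptr != nullptr)
      RecordAlloc(ptr, size);
    return ptr;
  }

  void HookedFree(const AllocatorDispatch* self, void* address, void* context) {
    // Account for the free before the memory is actually released.
    if (address != nullptr)
      RecordFree(address);
    self->next->free_function(self->next, address, context);
  }

The batch variants in the deleted file follow the same shape, simply looping RecordAlloc/RecordFree over each pointer before or after delegating.
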
diff --git a/base/debug/thread_heap_usage_tracker.h b/base/debug/thread_heap_usage_tracker.h
deleted file mode 100644
index 89166d0..0000000
--- a/base/debug/thread_heap_usage_tracker.h
+++ /dev/null
@@ -1,116 +0,0 @@
-// Copyright 2016 The Chromium Authors. All rights reserved.
-// Use of this source code is governed by a BSD-style license that can be
-// found in the LICENSE file.
-
-#ifndef BASE_DEBUG_THREAD_HEAP_USAGE_TRACKER_H_
-#define BASE_DEBUG_THREAD_HEAP_USAGE_TRACKER_H_
-
-#include <stdint.h>
-
-#include "base/base_export.h"
-#include "base/threading/thread_checker.h"
-
-namespace base {
-namespace allocator {
-struct AllocatorDispatch;
-} // namespace allocator
-
-namespace debug {
-
-// Used to store the heap allocator usage in a scope.
-struct ThreadHeapUsage {
- // The cumulative number of allocation operations.
- uint64_t alloc_ops;
-
- // The cumulative number of allocated bytes. Where available, this is
- // inclusive of heap padding and estimated or actual heap overhead.
- uint64_t alloc_bytes;
-
- // Where available, the cumulative number of heap padding and overhead bytes.
- uint64_t alloc_overhead_bytes;
-
- // The cumulative number of free operations.
- uint64_t free_ops;
-
- // The cumulative number of bytes freed.
- // Only recorded if the underlying heap shim can return the size of an
- // allocation.
- uint64_t free_bytes;
-
- // The maximal value of |alloc_bytes| - |free_bytes| seen for this thread.
- // Only recorded if the underlying heap shim supports returning the size of
- // an allocation.
- uint64_t max_allocated_bytes;
-};
-
-// By keeping a tally on heap operations, it's possible to track:
-// - the number of alloc/free operations, where a realloc is zero or one
-// of each, depending on the input parameters (see man realloc).
-// - the number of bytes allocated/freed.
-// - the number of estimated bytes of heap overhead used.
-// - the high-watermark amount of bytes allocated in the scope.
-// This in turn allows measuring the memory usage and memory usage churn over
-// a scope. Scopes must be cleanly nested, and each scope must be
-// destroyed on the thread where it's created.
-//
-// Note that this depends on the capabilities of the underlying heap shim. If
-// that shim cannot yield a size estimate for an allocation, it's not possible
-// to keep track of overhead, freed bytes, and the allocation high-water mark.
-class BASE_EXPORT ThreadHeapUsageTracker {
- public:
- ThreadHeapUsageTracker();
- ~ThreadHeapUsageTracker();
-
- // Start tracking heap usage on this thread.
- // This may only be called on the thread where the instance is created.
- // Note IsHeapTrackingEnabled() must be true.
- void Start();
-
- // Stop tracking heap usage on this thread and store the usage tallied.
- // If |usage_is_exclusive| is true, the usage tallied won't be added to the
- // outer scope's usage. If |usage_is_exclusive| is false, the usage tallied
- // in this scope will also tally to any outer scope.
- // This may only be called on the thread where the instance is created.
- void Stop(bool usage_is_exclusive);
-
- // After Stop(), returns the usage tallied from Start() to Stop().
- const ThreadHeapUsage& usage() const { return usage_; }
-
- // Returns this thread's heap usage from the start of the innermost
- // enclosing ThreadHeapUsageTracker instance, if any.
- static ThreadHeapUsage GetUsageSnapshot();
-
- // Enables the heap intercept. May only be called once, and only if the heap
- // shim is available, e.g. if BUILDFLAG(USE_ALLOCATOR_SHIM) is
- // true.
- static void EnableHeapTracking();
-
- // Returns true iff heap tracking is enabled.
- static bool IsHeapTrackingEnabled();
-
- protected:
- // Exposed for testing only - note that it's safe to re-EnableHeapTracking()
- // after calling this function in tests.
- static void DisableHeapTrackingForTesting();
-
- // Exposed for testing only.
- static void EnsureTLSInitialized();
-
- // Exposed to allow testing the shim without inserting it in the allocator
- // shim chain.
- static base::allocator::AllocatorDispatch* GetDispatchForTesting();
-
- private:
- ThreadChecker thread_checker_;
-
- // The heap usage at Start(), or the difference from Start() to Stop().
- ThreadHeapUsage usage_;
-
- // This thread's heap usage, non-null from Start() to Stop().
- ThreadHeapUsage* thread_usage_;
-};
-
-} // namespace debug
-} // namespace base
-
-#endif // BASE_DEBUG_THREAD_HEAP_USAGE_TRACKER_H_
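
Before its removal, the interface above was used as a nested, per-thread measurement scope. A hypothetical usage sketch follows; it assumes tracking was enabled at startup and a shim that reports allocation sizes, and `DoHeapIntensiveWork` is a made-up placeholder.

  #include <cstdint>

  #include "base/debug/thread_heap_usage_tracker.h"

  void DoHeapIntensiveWork();  // Hypothetical placeholder.

  void MeasureSomeWork() {
    base::debug::ThreadHeapUsageTracker tracker;
    tracker.Start();        // Zeroes this thread's tally for the new scope.

    DoHeapIntensiveWork();  // Heap operations on this thread are tallied.

    // false: fold this scope's usage into any enclosing tracker instead of
    // excluding it from the outer tally.
    tracker.Stop(false);
    const base::debug::ThreadHeapUsage& usage = tracker.usage();
    uint64_t net_bytes = usage.alloc_bytes - usage.free_bytes;
    // net_bytes is only meaningful when the shim can report allocation
    // sizes, per the header comments above.
    (void)net_bytes;
  }
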
diff --git a/base/process/memory_linux.cc b/base/process/memory_linux.cc
index 7c78cb8..7dece2a 100644
--- a/base/process/memory_linux.cc
+++ b/base/process/memory_linux.cc
@@ -8,7 +8,6 @@
#include <new>
-#include "base/allocator/allocator_shim.h"
#include "base/files/file_path.h"
#include "base/files/file_util.h"
#include "base/logging.h"
@@ -16,11 +15,6 @@
#include "base/strings/string_number_conversions.h"
#include "build_config.h"
-#if defined(USE_TCMALLOC)
-#include "third_party/tcmalloc/chromium/src/config.h"
-#include "third_party/tcmalloc/chromium/src/gperftools/tcmalloc.h"
-#endif
-
namespace base {
size_t g_oom_size = 0U;
@@ -50,11 +44,6 @@
std::set_new_handler(&OnNoMemory);
// If we're using glibc's allocator, the above functions will override
// malloc and friends and make them die on out of memory.
-
-#if defined(USE_TCMALLOC)
- // For tcmalloc, we need to tell it to behave like new.
- tc_set_new_mode(1);
-#endif
}
// NOTE: This is not the only version of this function in the source:
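
With the tcmalloc branch gone, the Linux path relies solely on the standard new-handler. A minimal sketch of that remaining strategy, with `abort()` standing in for the crash-with-OOM-signature the real handler performs:

  #include <cstdlib>
  #include <new>

  namespace {
  void OnNoMemory() {
    // The real handler records the attempted size and crashes with a
    // recognizable OOM signature; abort() stands in for that here.
    abort();
  }
  }  // namespace

  void EnableTerminationOnOutOfMemorySketch() {
    std::set_new_handler(&OnNoMemory);
    // Without tc_set_new_mode(1), operator new is always routed through the
    // handler, but plain malloc() is only covered if the rest of the file
    // overrides glibc's symbols, as the retained comment above notes.
  }
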
diff --git a/base/process/memory_mac.mm b/base/process/memory_mac.mm
index 6f51fbe..4fa31a1 100644
--- a/base/process/memory_mac.mm
+++ b/base/process/memory_mac.mm
@@ -4,8 +4,6 @@
#include "base/process/memory.h"
-#include "base/allocator/allocator_interception_mac.h"
-#include "base/allocator/allocator_shim.h"
#include "build_config.h"
namespace base {
@@ -23,11 +21,13 @@
}
bool UncheckedMalloc(size_t size, void** result) {
- return allocator::UncheckedMallocMac(size, result);
+ *result = malloc(size);
+ return *result != nullptr;
}
bool UncheckedCalloc(size_t num_items, size_t size, void** result) {
- return allocator::UncheckedCallocMac(num_items, size, result);
+ *result = calloc(num_items, size);
+ return *result != nullptr;
}
void EnableTerminationOnOutOfMemory() {
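
The Mac change above swaps the interception-based unchecked allocators for direct libc calls. A caller-side sketch of the contract follows (`TryBuildBuffer` is a made-up caller); note that `calloc(num_items, size)` already performs an overflow-checked multiply, which is why forwarding to it directly is safe.

  #include <cstddef>
  #include <cstdlib>

  bool UncheckedMalloc(size_t size, void** result);  // As patched above.

  // Attempt an allocation that is allowed to fail without tripping any
  // OOM termination; the caller degrades gracefully on failure.
  bool TryBuildBuffer(size_t size, void** out) {
    void* buffer = nullptr;
    if (!UncheckedMalloc(size, &buffer))
      return false;
    *out = buffer;
    return true;
  }
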
diff --git a/build/gen.py b/build/gen.py
index 3bf6fac..39257c9 100755
--- a/build/gen.py
+++ b/build/gen.py
@@ -168,10 +168,6 @@
include_dirs = [REPO_ROOT, os.path.join(REPO_ROOT, 'src')]
libs = []
- # //base/allocator/allocator_extension.cc needs this macro defined,
- # otherwise there would be link errors.
- cflags.extend(['-DNO_TCMALLOC', '-D__STDC_FORMAT_MACROS'])
-
if is_posix:
if options.debug:
cflags.extend(['-O0', '-g'])
@@ -229,8 +225,6 @@
static_libraries = {
'base': {'sources': [
- 'base/allocator/allocator_check.cc',
- 'base/allocator/allocator_extension.cc',
'base/at_exit.cc',
'base/base_paths.cc',
'base/base_switches.cc',
@@ -242,7 +236,6 @@
'base/debug/dump_without_crashing.cc',
'base/debug/stack_trace.cc',
'base/debug/task_annotator.cc',
- 'base/debug/thread_heap_usage_tracker.cc',
'base/environment.cc',
'base/files/file.cc',
'base/files/file_enumerator.cc',
@@ -621,10 +614,6 @@
'-lrt',
'-latomic',
])
- static_libraries['base']['sources'].extend([
- 'base/allocator/allocator_shim.cc',
- 'base/allocator/allocator_shim_default_dispatch_to_glibc.cc',
- ])
static_libraries['libevent']['include_dirs'].extend([
os.path.join(REPO_ROOT, 'base', 'third_party', 'libevent', 'linux')
])
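
The two sources dropped here implemented the shim itself and its glibc terminal dispatch. Roughly, the terminal dispatch ends the chain sketched after the deleted .cc above by forwarding straight to glibc's internal entry points rather than to another `next`; a hedged sketch:

  #include <cstddef>

  extern "C" {
  void* __libc_malloc(size_t size);  // glibc's internal allocator entry.
  void __libc_free(void* ptr);
  }

  struct AllocatorDispatch;  // As sketched after the deleted .cc above.

  void* GlibcMalloc(const AllocatorDispatch* /*self*/, size_t size,
                    void* /*context*/) {
    return __libc_malloc(size);  // No downstream dispatch: end of the chain.
  }

  void GlibcFree(const AllocatorDispatch* /*self*/, void* address,
                 void* /*context*/) {
    __libc_free(address);
  }
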
@@ -673,9 +662,6 @@
if is_win:
static_libraries['base']['sources'].extend([
- "base/allocator/partition_allocator/address_space_randomization.cc",
- 'base/allocator/partition_allocator/page_allocator.cc',
- "base/allocator/partition_allocator/spin_lock.cc",
'base/base_paths_win.cc',
'base/cpu.cc',
'base/debug/close_handle_hook_win.cc',