From c5627153035f207a26b3bb60dd2e0d81f4b9a60c Mon Sep 17 00:00:00 2001
From: Nico Weber
Date: Fri, 13 Sep 2019 14:58:24 +0000
Subject: [PATCH] Fix a few spellos in docs.

(Trying to debug an incremental build thing on a bot...)

git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@371860 91177308-0d34-0410-b5e6-96231b3b80d8
---
 docs/BuildingADistribution.rst                       |  8 ++++----
 docs/CommandGuide/llvm-nm.rst                        |  6 +++---
 docs/LangRef.rst                                     |  8 ++++----
 docs/ORCv2.rst                                       | 10 +++++-----
 docs/PDB/MsfFile.rst                                 |  2 +-
 docs/SpeculativeLoadHardening.md                     |  4 ++--
 docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst |  2 +-
 docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst |  2 +-
 8 files changed, 21 insertions(+), 21 deletions(-)

diff --git a/docs/BuildingADistribution.rst b/docs/BuildingADistribution.rst
index 4c883366ba7..3051e08b6c8 100644
--- a/docs/BuildingADistribution.rst
+++ b/docs/BuildingADistribution.rst
@@ -132,10 +132,10 @@ the performance of the generated binaries.
 In addition to PGO profiling we also have limited support in-tree for generating
 linker order files. These files provide the linker with a suggested ordering for
 functions in the final binary layout. This can measurably speed up clang by
-physically grouping functions that are called temporally close to eachother. The
-current tooling is only available on Darwin systems with ``dtrace(1)``. It is
-worth noting that dtrace is non-deterministic, and so the order file generation
-using dtrace is also non-deterministic.
+physically grouping functions that are called temporally close to each other.
+The current tooling is only available on Darwin systems with ``dtrace(1)``. It
+is worth noting that dtrace is non-deterministic, and so the order file
+generation using dtrace is also non-deterministic.
 
 Options for Reducing Size
 =========================

diff --git a/docs/CommandGuide/llvm-nm.rst b/docs/CommandGuide/llvm-nm.rst
index f071e1be1a5..1efa15e2dfa 100644
--- a/docs/CommandGuide/llvm-nm.rst
+++ b/docs/CommandGuide/llvm-nm.rst
@@ -34,7 +34,7 @@ a, A
 
 b, B
 
- Unitialized data (bss) object.
+ Uninitialized data (bss) object.
 
 C
 
@@ -90,7 +90,7 @@ V
 
  ELF: Defined weak object symbol. This definition will only be used if no
  regular definitions exist in a link. If multiple weak definitions and no
- regular definitons exist, one of the weak definitions will be used.
+ regular definitions exist, one of the weak definitions will be used.
 
 w
 
@@ -101,7 +101,7 @@ W
 
 Defined weak symbol other than an ELF object symbol. This definition will only
 be used if no regular definitions exist in a link. If multiple weak definitions
-and no regular definitons exist, one of the weak definitions will be used.
+and no regular definitions exist, one of the weak definitions will be used.
 
 \-
 
diff --git a/docs/LangRef.rst b/docs/LangRef.rst
index 5fbeb5f21fc..03b017c94b5 100644
--- a/docs/LangRef.rst
+++ b/docs/LangRef.rst
@@ -3521,7 +3521,7 @@ resulting assembly string is parsed by LLVM's integrated assembler unless it is
 disabled -- even when emitting a ``.s`` file -- and thus must contain assembly
 syntax known to LLVM.
 
-LLVM also supports a few more substitions useful for writing inline assembly:
+LLVM also supports a few more substitutions useful for writing inline assembly:
 
 - ``${:uid}``: Expands to a decimal integer unique to this inline assembly blob.
   This substitution is useful when declaring a local label. Many standard
@@ -6518,7 +6518,7 @@ Where each VFuncId has the format:
     vFuncId: (TypeIdRef, offset: 16)
 
 Where each ``TypeIdRef`` refers to a :ref:`type id`
-by summary id or ``GUID`` preceeded by a ``guid:`` tag.
+by summary id or ``GUID`` preceded by a ``guid:`` tag.
 
 TypeCheckedLoadVCalls
 """""""""""""""""""""
@@ -11364,7 +11364,7 @@ privileges.
 The default behavior is to emit a call to ``__clear_cache`` from the run
 time library.
 
-This instrinsic does *not* empty the instruction pipeline. Modifications
+This intrinsic does *not* empty the instruction pipeline. Modifications
 of the current function are outside the scope of the intrinsic.
 
 '``llvm.instrprof.increment``' Intrinsic
@@ -11439,7 +11439,7 @@ The last argument specifies the value of the increment of the counter variable.
 
 Semantics:
 """"""""""
 
-See description of '``llvm.instrprof.increment``' instrinsic.
+See description of '``llvm.instrprof.increment``' intrinsic.
 
 '``llvm.instrprof.value.profile``' Intrinsic
diff --git a/docs/ORCv2.rst b/docs/ORCv2.rst
index 6e630a7d54e..0a8788a6b3a 100644
--- a/docs/ORCv2.rst
+++ b/docs/ORCv2.rst
@@ -10,7 +10,7 @@ Introduction
 
 This document aims to provide a high-level overview of the design and
 implementation of the ORC JIT APIs. Except where otherwise stated, all
-discussion applies to the design of the APIs as of LLVM verison 9 (ORCv2).
+discussion applies to the design of the APIs as of LLVM version 9 (ORCv2).
 
 Use-cases
 =========
@@ -19,7 +19,7 @@ ORC provides a modular API for building JIT compilers. There are a range of
 use cases for such an API. For example:
 
 1. The LLVM tutorials use a simple ORC-based JIT class to execute expressions
-compiled from a toy languge: Kaleidoscope.
+compiled from a toy language: Kaleidoscope.
 
 2. The LLVM debugger, LLDB, uses a cross-compiling JIT for expression
 evaluation. In this use case, cross compilation allows expressions compiled
@@ -31,7 +31,7 @@ optimizations within an existing JIT infrastructure.
 
 4. In interpreters and REPLs, e.g. Cling (C++) and the Swift interpreter.
 
-By adoping a modular, library-based design we aim to make ORC useful in as many
+By adopting a modular, library-based design we aim to make ORC useful in as many
 of these contexts as possible.
 
 Features
@@ -237,7 +237,7 @@ but they may also wrap a jit-linker directly (if the program representation
 backing the definitions is an object file), or may even be a class that writes
 bits directly into memory (for example, if the definitions are stubs).
 Materialization is the blanket term for any actions (compiling, linking,
-splatting bits, registering with runtimes, etc.) that are requried to generate a
+splatting bits, registering with runtimes, etc.) that are required to generate a
 symbol definition that is safe to call or access.
 
 As each materializer completes its work it notifies the JITDylib, which in turn
@@ -495,7 +495,7 @@ or creating any Modules attached to it. E.g.
 
   TP.wait();
 
 To make exclusive access to Modules easier to manage the ThreadSafeModule class
-provides a convenince function, ``withModuleDo``, that implicitly (1) locks the
+provides a convenience function, ``withModuleDo``, that implicitly (1) locks the
 associated context, (2) runs a given function object, (3) unlocks the context,
 and (3) returns the result generated by the function object. E.g.
diff --git a/docs/PDB/MsfFile.rst b/docs/PDB/MsfFile.rst
index 09eba4a84b7..810870c048e 100644
--- a/docs/PDB/MsfFile.rst
+++ b/docs/PDB/MsfFile.rst
@@ -104,7 +104,7 @@ write your new modified bitfield to FPM2, and vice versa. Only when you commit
 the file to disk do you need to swap the value in the SuperBlock to point to
 the new ``FreeBlockMapBlock``.
 
-The Free Block Maps are stored as a series of single blocks thoughout the file
+The Free Block Maps are stored as a series of single blocks throughout the file
 at intervals of BlockSize. Because each FPM block is of size ``BlockSize``
 bytes, it contains 8 times as many bits as an interval has blocks.
 This means that the first block of each FPM refers to the first 8 intervals of the file
diff --git a/docs/SpeculativeLoadHardening.md b/docs/SpeculativeLoadHardening.md
index de6dc015c57..50b9ea39a42 100644
--- a/docs/SpeculativeLoadHardening.md
+++ b/docs/SpeculativeLoadHardening.md
@@ -511,7 +511,7 @@ Once we have the predicate accumulated into a special value for correct vs.
 misspeculated, we need to apply this to loads in a way that ensures they do not
 leak secret data. There are two primary techniques for this: we can either
 harden the loaded value to prevent observation, or we can harden the address
-itself to prevent the load from occuring. These have significantly different
+itself to prevent the load from occurring. These have significantly different
 performance tradeoffs.
 
 
@@ -942,7 +942,7 @@ We can use this broader barrier to speculative loads executing between
 functions. We emit it in the entry block to handle calls, and prior to each
 return. This approach also has the advantage of providing the strongest degree
 of mitigation when mixed with unmitigated code by halting all misspeculation
-entering a function which is mitigated, regardless of what occured in the
+entering a function which is mitigated, regardless of what occurred in the
 caller. However, such a mixture is inherently more risky. Whether this kind of
 mixture is a sufficient mitigation requires careful analysis.
 
diff --git a/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst b/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst
index f5a46a68fcf..bf4e2398d28 100644
--- a/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst
+++ b/docs/tutorial/MyFirstLanguageFrontend/LangImpl04.rst
@@ -318,7 +318,7 @@ look like this:
       TheJIT->removeModule(H);
     }
 
-If parsing and codegen succeeed, the next step is to add the module containing
+If parsing and codegen succeed, the next step is to add the module containing
 the top-level expression to the JIT.
 We do this by calling addModule, which triggers code generation for all the
 functions in the module, and returns a handle that can be used to remove the
 module from the JIT later. Once the module
diff --git a/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst b/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst
index 218e4419135..31e2ffb1690 100644
--- a/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst
+++ b/docs/tutorial/MyFirstLanguageFrontend/LangImpl07.rst
@@ -520,7 +520,7 @@ Here is the code after the mem2reg pass runs:
 
 This is a trivial case for mem2reg, since there are no redefinitions of
 the variable. The point of showing this is to calm your tension about
-inserting such blatent inefficiencies :).
+inserting such blatant inefficiencies :).
 
 After the rest of the optimizers run, we get:

-- 
2.50.1
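
The patch above is in git's mailbox (`format-patch`) format. Below is a rough, self-contained sketch of the round trip such a patch makes: a spelling fix is committed, exported with `git format-patch`, and re-applied with `git am`. The file name, commit messages, and throwaway-repo setup are invented for the demo and are not part of the patch above.

```shell
set -e
# Throwaway repo (hypothetical names throughout).
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo
cd repo
git config user.email dev@example.com
git config user.name Dev

# Commit a doc with a typo, then commit the fix.
printf 'functions that are called temporally close to eachother\n' > doc.rst
git add doc.rst
git commit -qm 'Add doc'
sed -i.bak 's/eachother/each other/' doc.rst && rm doc.rst.bak
git commit -qam 'Fix a spello in docs.'

# Export the fix as a mailbox patch, rewind, and re-apply it.
git format-patch -1 -o .. >/dev/null   # writes ../0001-Fix-a-spello-in-docs.patch
git reset -q --hard HEAD~1             # back to the typo version
git am -q ../0001-*.patch              # recreates the fix commit from the patch
grep 'each other' doc.rst              # the fix is present again
```

In the same way, the patch above could be dry-run checked with `git apply --check` and applied with `git am` in a checkout at the parent revision.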