* If xmin_status happens to be XID_IS_CURRENT_XID, then in theory
* any such DDL changes ought to be visible to us, so perhaps
* we could check anyway in that case. But, for now, let's be
- * conservate and treat this like any other uncommitted insert.
+ * conservative and treat this like any other uncommitted insert.
*/
return false;
}
for <xref linkend="sql-altertable"/>. When set to a positive value,
each block range is assumed to contain this number of distinct non-null
values. When set to a negative value, which must be greater than or
- equal to -1, the number of distinct non-null is assumed linear with
+ equal to -1, the number of distinct non-null values is assumed to grow linearly with
the maximum possible number of tuples in the block range (about 290
rows per block). The default value is <literal>-0.1</literal>, and
the minimum number of distinct non-null values is <literal>16</literal>.
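To make the arithmetic concrete, here is a minimal, self-contained sketch (not the actual BRIN code) of how the assumed per-range distinct count follows from this parameter. ROWS_PER_BLOCK comes from the "about 290 rows per block" figure above, and the pages_per_range value of 128 is simply the usual BRIN default, used here for illustration.

#include <stdio.h>

#define ROWS_PER_BLOCK  290     /* "about 290 rows per block", from the text */
#define MIN_DISTINCT    16      /* documented minimum */

/* Assumed number of distinct non-null values in one block range. */
static double
assumed_ndistinct(double setting, int pages_per_range)
{
    double ndistinct;

    if (setting > 0)
        ndistinct = setting;    /* positive: a fixed per-range count */
    else
        /* negative: scale with the range's maximum possible tuple count */
        ndistinct = -setting * ROWS_PER_BLOCK * pages_per_range;

    return ndistinct < MIN_DISTINCT ? MIN_DISTINCT : ndistinct;
}

int
main(void)
{
    /* default setting -0.1 with 128 pages per range: about 3712 values */
    printf("%.0f\n", assumed_ndistinct(-0.1, 128));
    return 0;
}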
Returns whether all the ScanKey entries are consistent with the given
indexed values for a range.
The attribute number to use is passed as part of the scan key.
- Multiple scan keys for the same attribute may be passed at once, the
+ Multiple scan keys for the same attribute may be passed at once; the
number of entries is determined by the <literal>nkeys</literal> parameter.
</para>
</listitem>
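As a rough illustration of the contract just described (every ScanKey entry must hold for the summarized range, otherwise the range is rejected), the following generic sketch may help. The SummaryRange and Key types and the equality-only check are stand-ins for illustration, not the actual BRIN ScanKey machinery, and the per-key attribute number is omitted.

#include <stdbool.h>

typedef struct { int min; int max; } SummaryRange;   /* per-range summary */
typedef struct { int value; } Key;                    /* equality key only */

/* True only if every one of the nkeys entries is consistent with the range. */
static bool
range_consistent(const SummaryRange *range, const Key *keys, int nkeys)
{
    for (int i = 0; i < nkeys; i++)
    {
        if (keys[i].value < range->min || keys[i].value > range->max)
            return false;       /* one failing key rules the range out */
    }
    return true;
}

int
main(void)
{
    SummaryRange r = {10, 20};
    Key keys[] = {{12}, {18}};

    return range_consistent(&r, keys, 2) ? 0 : 1;   /* consistent: exit 0 */
}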
<para>
The minmax-multi operator class is also intended for data types implementing
- a totally ordered sets, and may be seen as a simple extension of the minmax
+ a totally ordered set, and may be seen as a simple extension of the minmax
operator class. While minmax operator class summarizes values from each block
range into a single contiguous interval, minmax-multi allows summarization
into multiple smaller intervals to improve handling of outlier values.
</para>
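A tiny, self-contained sketch with made-up values (not the actual opclass code) shows why several smaller intervals handle outliers better than one wide interval:

#include <stdbool.h>
#include <stdio.h>

typedef struct { long lo, hi; } Interval;

static bool
overlaps(Interval iv, long qlo, long qhi)
{
    return iv.lo <= qhi && qlo <= iv.hi;
}

int
main(void)
{
    /* the values {1, 2, 3, 1000000} in one block range, summarized two ways */
    Interval minmax[]       = {{1, 1000000}};
    Interval minmax_multi[] = {{1, 3}, {1000000, 1000000}};
    long     qlo = 50, qhi = 60;        /* WHERE col BETWEEN 50 AND 60 */
    bool     skip;

    /* plain minmax: the single wide interval overlaps, range must be read */
    printf("minmax must scan range: %d\n", overlaps(minmax[0], qlo, qhi));

    /* minmax-multi: neither small interval overlaps, range can be skipped */
    skip = !overlaps(minmax_multi[0], qlo, qhi) &&
           !overlaps(minmax_multi[1], qlo, qhi);
    printf("minmax-multi can skip range: %d\n", skip);
    return 0;
}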
<para>
- The third option is to declare sql identifier linked to
+ The third option is to declare an SQL identifier linked to
the connection, for example:
<programlisting>
EXEC SQL AT <replaceable>connection-name</replaceable> DECLARE <replaceable>statement-name</replaceable> STATEMENT;
EXEC SQL PREPARE <replaceable>statement-name</replaceable> FROM :<replaceable>dyn-string</replaceable>;
</programlisting>
- Once you link a sql identifier to a connection, you execute a dynamic SQL
- without AT clause. Note that this option behaves like preprocessor directives,
- therefore the link is enabled only in the file.
+ Once you link an SQL identifier to a connection, you can execute dynamic
+ SQL statements without an AT clause. Note that this option behaves like a
+ preprocessor directive; therefore the link is effective only within the
+ file in which it is declared.
</para>
<para>
Here is an example program using this option:
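The full example program is not reproduced in this excerpt; the sketch below only suggests, under stated assumptions, what such a program might look like. The connection name con1, the database name testdb, and the trivial query are illustrative, not taken from the documentation's actual example.

#include <string.h>

EXEC SQL BEGIN DECLARE SECTION;
char dyn_sql[64];
int  result;
EXEC SQL END DECLARE SECTION;

int
main(void)
{
    EXEC SQL CONNECT TO testdb AS con1;

    /* link the statement identifier to con1 */
    EXEC SQL AT con1 DECLARE stmt STATEMENT;

    strcpy(dyn_sql, "SELECT 1");
    EXEC SQL PREPARE stmt FROM :dyn_sql;    /* runs on con1, no AT clause */
    EXEC SQL EXECUTE stmt INTO :result;

    EXEC SQL DISCONNECT con1;
    return 0;
}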
<title>Description</title>
<para>
- <command>DECLARE STATEMENT</command> declares SQL statement identifier.
+ <command>DECLARE STATEMENT</command> declares an SQL statement identifier.
SQL statement identifier can be associated with the connection.
- When the identifier is used by dynamic SQL statements, these SQLs are executed
- by using the associated connection.
- The namespace of the declaration is the precompile unit, and multiple declarations to
- the same SQL statement identifier is not allowed.
-
- Note that if the precompiler run in the Informix compatibility mode and some SQL statement
- is declared, "database" can not be used as a cursor name.
+ When the identifier is used by dynamic SQL statements, the statements
+ are executed using the associated connection.
+ The namespace of the declaration is the precompile unit, and multiple
+ declarations to the same SQL statement identifier are not allowed.
+ Note that if the precompiler runs in Informix compatibility mode and an
+ SQL statement is declared, "database" cannot be used as a cursor name.
</para>
</refsect1>
* and if we're violating them. In that case we can
* terminate early, without invoking the support function.
*
- * As there may be more keys, we can only detemine
+ * As there may be more keys, we can only determine
* mismatch within this loop.
*/
if (bdesc->bd_info[attno - 1]->oi_regular_nulls &&
/*
* Collation from the first key (has to be the same for
- * all keys for the same attribue).
+ * all keys for the same attribute).
*/
collation = keys[attno - 1][0]->sk_collation;
{
/*
* XXX At this point we only need a single proc (to compute the hash), but
- * let's keep the array just like inclusion and minman opclasses, for
+ * let's keep the array just like inclusion and minmax opclasses, for
* consistency. We may need additional procs in the future.
*/
FmgrInfo extra_procinfos[BLOOM_MAX_PROCNUMS];
} DistanceValue;
-/* Cache for support and strategy procesures. */
+/* Cache for support and strategy procedures. */
static FmgrInfo *minmax_multi_get_procinfo(BrinDesc *bdesc, uint16 attno,
uint16 procnum);
}
/*
- * Given an array of expanded ranges, compute distance of the gaps betwen
+ * Given an array of expanded ranges, compute distance of the gaps between
* the ranges - for ncranges there are (ncranges-1) gaps.
*
* We simply call the "distance" function to compute the (max-min) for pairs
*
* We don't simply check against range->maxvalues again. The deduplication
* might have freed very little space (e.g. just one value), forcing us to
- * do depuplication very often. In that case it's better to do compaction
+ * do deduplication very often. In that case it's better to do compaction
* and reduce more space.
*/
if (2 * range->nranges + range->nvalues <= range->maxvalues * MINMAX_BUFFER_LOAD_FACTOR)
/*
* In sorted build, we use a stack of these structs, one for each level,
- * to hold an in-memory buffer of the righmost page at the level. When the
+ * to hold an in-memory buffer of the rightmost page at the level. When the
* page fills up, it is written out and a new page is allocated.
*/
typedef struct GistSortedBuildPageState
* Currently we do not support non-index-based scans here. (In principle
* we could do a heapscan and sort, but the uses are in places that
* probably don't need to still work with corrupted catalog indexes.)
- * For the moment, therefore, these functions are merely the thinest of
+ * For the moment, therefore, these functions are merely the thinnest of
* wrappers around index_beginscan/index_getnext_slot. The main reason for
* their existence is to centralize possible future support of lossy operators
* in catalog scans.
* _bt_delitems_delete. These steps must take place before each function's
* critical section begins.
*
- * updatabable and nupdatable are inputs, though note that we will use
+ * updatable and nupdatable are inputs, though note that we will use
* _bt_update_posting() to replace the original itup with a pointer to a final
* version in palloc()'d memory. Caller should free the tuples when its done.
*
* some extra index tuples that were practically free for tableam to check in
* passing (when they actually turn out to be safe to delete). It probably
* only makes sense for the tableam to go ahead with these extra checks when
- * it is block-orientated (otherwise the checks probably won't be practically
+ * it is block-oriented (otherwise the checks probably won't be practically
* free, which we rely on). The tableam interface requires the tableam side
* to handle the problem, though, so this is okay (we as an index AM are free
* to make the simplifying assumption that all tableams must be block-based).
* makeUniqueTypeName
* Generate a unique name for a prospective new type
*
- * Given a typeName, return a new palloc'ed name by preprending underscores
+ * Given a typeName, return a new palloc'ed name by prepending underscores
* until a non-conflicting name results.
*
* If tryOriginal, first try with zero underscores.
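The loop described here is simple enough to sketch on its own. The following is a hedged, self-contained approximation: name_exists() is a hypothetical stand-in for the catalog lookup, and the real function additionally handles palloc allocation and name-length limits.

#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for the catalog existence check. */
static bool
name_exists(const char *name, const char **taken, int ntaken)
{
    for (int i = 0; i < ntaken; i++)
        if (strcmp(name, taken[i]) == 0)
            return true;
    return false;
}

/* Prepend underscores until the name no longer conflicts. */
static char *
make_unique_name(const char *base, bool try_original,
                 const char **taken, int ntaken)
{
    int   maxprefix = 30;               /* arbitrary safety bound */
    char *candidate = malloc(strlen(base) + maxprefix + 1);

    for (int prefix = try_original ? 0 : 1; prefix <= maxprefix; prefix++)
    {
        memset(candidate, '_', prefix);
        strcpy(candidate + prefix, base);
        if (!name_exists(candidate, taken, ntaken))
            return candidate;
    }
    free(candidate);
    return NULL;                        /* give up, unlike the real code */
}

int
main(void)
{
    const char *taken[] = {"point", "_point"};
    char *name = make_unique_name("point", true, taken, 2);

    printf("%s\n", name ? name : "(none)");   /* prints "__point" */
    free(name);
    return 0;
}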
{
/*
* Partitioned tables don't have storage, so we don't set any fields in
- * their pg_class entries except for relpages, which is necessary for
+ * their pg_class entries except for reltuples, which is necessary for
* auto-analyze to work properly.
*/
vac_update_relstats(onerel, -1, totalrows,
/*
* We're in full sort mode accumulating a minimum number of tuples
* and not checking for prefix key equality yet, so we can't
- * assume the group pivot tuple will reamin the same -- unless
+ * assume the group pivot tuple will remain the same -- unless
* we're using a minimum group size of 1, in which case the pivot
- * is obviously still the pviot.
+ * is obviously still the pivot.
*/
if (nTuples != minGroupSize)
ExecClearTuple(node->group_pivot);
}
/*
- * If chgParam of subnode is not null, theni the plan will be re-scanned
+ * If chgParam of subnode is not null, then the plan will be re-scanned
* by the first ExecProcNode.
*/
if (outerPlan->chgParam == NULL)
* SQL standard actually does it in that more complicated way), but the
* internal representation allows us to construct it this way.)
*
- * With a search caluse
+ * With a search clause
*
* SEARCH DEPTH FIRST BY col1, col2 SET sqc
*
/*
* clauselist_apply_dependencies
* Apply the specified functional dependencies to a list of clauses and
- * return the estimated selecvitity of the clauses that are compatible
+ * return the estimated selectivity of the clauses that are compatible
* with any of the given dependencies.
*
* This will estimate all not-already-estimated clauses that are compatible
if (!bms_is_member(listidx, *estimatedclauses))
{
/*
- * If it's a simple column refrence, just extract the attnum. If
+ * If it's a simple column reference, just extract the attnum. If
* it's an expression, assign a negative attnum as if it was a
* system attribute.
*/
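Returning to the dependency-based estimate described a few lines above: the combination rule given in the PostgreSQL documentation for functional-dependency statistics is roughly P(a AND b) = P(a) * (f + (1 - f) * P(b)), where f is the degree of the dependency a => b. A short worked example with made-up numbers (not planner output):

#include <stdio.h>

int
main(void)
{
    double s_a    = 0.01;   /* selectivity of the clause on column a */
    double s_b    = 0.02;   /* selectivity of the clause on column b */
    double degree = 0.9;    /* strength of the functional dependency a => b */

    /* without extended statistics: plain independence assumption */
    double independent = s_a * s_b;                             /* 0.0002 */

    /* with the dependency applied */
    double combined = s_a * (degree + (1.0 - degree) * s_b);    /* ~0.009 */

    printf("independent: %f, with dependency: %f\n", independent, combined);
    return 0;
}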
*/
for (i = 0; i < nattrs; i++)
{
- /* keep the maximmum statistics target */
+ /* keep the maximum statistics target */
if (stats[i]->attr->attstattarget > stattarget)
stattarget = stats[i]->attr->attstattarget;
}
* older than this are known not running any more.
*
* And try to advance the bounds of GlobalVis{Shared,Catalog,Data,Temp}Rels
- * for the benefit of theGlobalVisTest* family of functions.
+ * for the benefit of the GlobalVisTest* family of functions.
*
* Note: this function should probably not be called with an argument that's
* not statically allocated (see xip allocation below).
(const unsigned char *) Affix->repl,
(ptr - 1)->len))
{
- /* leave only unique and minimals suffixes */
+ /* leave only unique and minimal suffixes */
ptr->affix = Affix->repl;
ptr->len = Affix->replen;
ptr->issuffix = issuffix;
if (!MyBEEntry)
return 0;
- /* There's no need for a look around pgstat_begin_read_activity /
+ /* There's no need for a lock around pgstat_begin_read_activity /
* pgstat_end_read_activity here as it's only called from
* pg_stat_get_activity which is already protected, or from the same
- * backend which mean that there won't be concurrent write.
+ * backend which means that there won't be concurrent writes.
*/
return MyBEEntry->st_queryid;
}
/*
- * Estimate size occupied by serialized multirage.
+ * Estimate size occupied by serialized multirange.
*/
static Size
multirange_size_estimate(TypeCacheEntry *rangetyp, int32 range_count,
/*
* Process a simple Var expression, by matching it to keys
- * directly. If there's a matchine expression, we'll try
+ * directly. If there's a matching expression, we'll try
* matching it later.
*/
if (IsA(varinfo->var, Var))
* and the target. But if the source is a standby server, it's possible
* that the last common checkpoint is *after* the standby's restartpoint.
* That implies that the source server has applied the checkpoint record,
- * but hasn't perfomed a corresponding restartpoint yet. Make sure we
+ * but hasn't performed a corresponding restartpoint yet. Make sure we
* start at the restartpoint's redo point in that case.
*
* Use the old version of the source's control file for this. The server
}
/*
- * pg_waldump's WAL page rader
+ * pg_waldump's WAL page reader
*
* timeline and startptr specifies the LSN, and reads up to endptr.
*/
/*
* In backend, use an allocation in TopMemoryContext to count for resowner
- * cleanup handling if necesary. For versions of OpenSSL where HMAC_CTX is
+ * cleanup handling if necessary. For versions of OpenSSL where HMAC_CTX is
* known, just use palloc(). In frontend, use malloc to be able to return
* a failure status back to the caller.
*/
*
* For each subsequent entry in the history list, the "good_match"
* is lowered by 10%. So the compressor will be more happy with
- * short matches the farer it has to go back in the history.
+ * short matches the further it has to go back in the history.
* Another "speed against ratio" preference characteristic of
* the algorithm.
*
}
cur = NULL;
- /* remove old delared statements if any are still there */
+ /* remove old declared statements if any are still there */
for (list = g_declared_list; list != NULL;)
{
struct declared_list *this = list;
* is odd, moving left simply involves halving lim: e.g., when lim
* is 5 we look at item 2, so we change lim to 2 so that we will
* look at items 0 & 1. If lim is even, the same applies. If lim
- * is odd, moving right again involes halving lim, this time moving
+ * is odd, moving right again involves halving lim, this time moving
* the base up one item past p: e.g., when lim is 5 we change base
* to item 3 and make lim 2 so that we will look at items 3 and 4.
* If lim is even, however, we have to shrink it by one before