forked from postgres/postgres
Sync Fork from Upstream Repo #145
Merged
Two functions, multirange_range_overlaps_bsearch_comparison() and multirange_range_contains_bsearch_comparison(), contain bugs that make them return -1 instead of 1. This commit fixes these bugs and adds corresponding regression tests.
We have operators for checking whether a multirange contains a range, but not the opposite. This commit improves the completeness of the operator set by adding two new operators: @> (anyrange,anymultirange) and <@ (anymultirange,anyrange). Catversion is bumped.
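A minimal sketch of the two new operators described above, using the built-in int4 range and multirange constructors; both queries should return true on a server that includes this commit:

```sql
-- Range contains multirange, and multirange is contained in range.
SELECT int4range(1, 10) @> int4multirange(int4range(2, 3), int4range(5, 6));  -- true
SELECT int4multirange(int4range(2, 3)) <@ int4range(1, 10);                   -- true
```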
There is a set of *_internal() functions exposed in include/utils/multirangetypes.h. This commit improves the signatures of these functions in two ways. * Add const qualifiers where applicable. * Replace the multirange typecache argument with a range typecache argument. The multirange typecache was used solely to find the range typecache, and the range typecache is easier for the caller to find.
6df7a96 introduced a set of operators between ranges and multiranges. Existing GiST indexes for ranges can easily support the majority of them. This commit adds support for the new operators to the existing range GiST indexes. The new operators reside at the same strategy numbers as the existing ones; the appropriate check function is determined using the subtype. Catversion is bumped.
When the postmaster sends SIGQUIT to its children, there's no real need for all the children to log that fact; the postmaster already made a log entry about it, so adding perhaps dozens or hundreds of child-process log entries adds nothing of value. So, let's introduce a new ereport level to specify "WARNING, but never send to log" and use that for these messages. Such a change wouldn't have been desirable before commit 7e784d1, because if someone manually SIGQUIT's a backend, we *do* want to log that. But now we can tell the difference between a signal that was issued by the postmaster and one that was not with reasonable certainty. While we're here, also clear error_context_stack before ereport'ing, to prevent error callbacks from being invoked in the signal-handler context. This should reduce the odds of getting hung up while trying to notify the client. Per a suggestion from Andres Freund. Discussion: https://postgr.es/m/20201225230331.hru3u6obyy6j53tk@alap3.anarazel.de
Addition of multirange info to tables 8.27 and 65.1 made them start throwing "exceed the available area" warnings in PDF docs builds. For 8.27, twiddling the existing column width hints was enough to fix this. For 65.1, I twiddled the widths a little, but to really fix it I had to insert a space after each comma in the table, to allow a line break to occur there. (This seemed easier to read and maintain than the alternative of inserting &zwsp; entities.) Per buildfarm.
Those two modules included references to libpq's source path, without using anything from libpq. Some copy-pastos done when each module was created are likely at the origin of those useless references (aecf5ee for old_snapshot, fe59e56 for adminpack). Reviewed-by: Tom Lane, David Rowley Discussion: https://postgr.es/m/X+LQpfLyk7jgzUki@paquier.xyz
This suffices for testing v12 -> v13; some other version pairs need more changes. Back-patch to v10, which removed the function.
Commit 1ed6b89 eliminated support for them, so drop them from regression databases before upgrading. This is necessary but not sufficient for testing v13 -> v14 upgrades. Discussion: https://postgr.es/m/449144.1600439950@sss.pgh.pa.us
This adds six methods to the output plugin API, adding support for streaming changes of two-phase transactions at prepare time. * begin_prepare * filter_prepare * prepare * commit_prepared * rollback_prepared * stream_prepare Most of this is a simple extension of the existing methods, with the semantic difference that the transaction is not yet committed and may be aborted later. Until now two-phase transactions were translated into regular transactions on the subscriber, and the GID was not forwarded to it. None of the two-phase commands were communicated to the subscriber. This patch provides the infrastructure for logical decoding plugins to be informed of two-phase commands like PREPARE TRANSACTION, COMMIT PREPARED and ROLLBACK PREPARED, with the corresponding GID. This also extends the 'test_decoding' plugin, implementing these new methods. This commit simply adds these new APIs and the upcoming patch to "allow the decoding at prepare time in ReorderBuffer" will use these APIs. Author: Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich Reviewed-by: Amit Kapila, Peter Smith, Sawada Masahiko, and Dilip Kumar Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru https://postgr.es/m/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com
IF NOT EXISTS was ignored when specified in an EXPLAIN query for CREATE MATERIALIZED VIEW or CREATE TABLE AS. Hence, if this clause was specified, the caller would get a failure if the relation already exists instead of a success with a NOTICE message. This commit makes the behavior of IF NOT EXISTS in EXPLAIN consistent with the non-EXPLAIN'd DDL queries, preventing a failure with IF NOT EXISTS if the relation to-be-created already exists. The skip is done before the SELECT query used for the relation is planned or executed, and a "dummy" plan is generated instead depending on the format used by EXPLAIN. Author: Bharath Rupireddy Reviewed-by: Zhijie Hou, Michael Paquier Discussion: https://postgr.es/m/CALj2ACVa3oJ9O_wcGd+FtHWZds04dEKcakxphGz5POVgD4wC7Q@mail.gmail.com
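A hedged illustration of the behavior change; the view name mv is hypothetical:

```sql
CREATE MATERIALIZED VIEW mv AS SELECT 1 AS x;

-- Before this commit the EXPLAIN failed with "relation already exists";
-- now it emits the usual NOTICE and shows a dummy plan, matching the
-- behavior of the plain DDL statement.
EXPLAIN CREATE MATERIALIZED VIEW IF NOT EXISTS mv AS SELECT 1 AS x;
```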
secure_open_gssapi() installed the krb_server_keyfile setting as KRB5_KTNAME unconditionally, so long as it's not empty. However, pg_GSS_recvauth() only installed it if KRB5_KTNAME wasn't set already, leading to a troubling inconsistency: in theory, clients could see different sets of server principal names depending on whether they use GSSAPI encryption. Always using krb_server_keyfile seems like the right thing, so make both places do that. Also fix up secure_open_gssapi()'s lack of a check for setenv() failure --- it's unlikely, surely, but security-critical actions are no place to be sloppy. Also improve the associated documentation. This patch does nothing about secure_open_gssapi()'s use of setenv(), and indeed causes pg_GSS_recvauth() to use it too. That's nominally against project portability rules, but since this code is only built with --with-gssapi, I do not feel a need to do something about this in the back branches. A fix will be forthcoming for HEAD though. Back-patch to v12 where GSSAPI encryption was introduced. The dubious behavior in pg_GSS_recvauth() goes back further, but it didn't have anything to be inconsistent with, so let it be. Discussion: https://postgr.es/m/2187460.1609263156@sss.pgh.pa.us
An attempt to get a selectivity estimate for the @> (anymultirange, anyrange) operator caused an error in the buildfarm, because this operator was missing from the switch() in calc_hist_selectivity(). Fix that, and also make the regression tests reliably check that selectivity estimation for (multi)ranges doesn't fail. Previously, whether we tested selectivity estimation for (multi)ranges depended on whether autovacuum managed to gather statistics concurrently with the test. Reported-by: Michael Paquier Discussion: https://postgr.es/m/X%2BwmgjRItuvHNBeV%40paquier.xyz
Since at least 2001 we've used putenv() and avoided setenv(), on the grounds that the latter was unportable and not in POSIX. However, POSIX added it that same year, and by now the situation has reversed: setenv() is probably more portable than putenv(), since POSIX now treats the latter as not being a core function. And setenv() has cleaner semantics too. So, let's reverse that old policy. This commit adds a simple src/port/ implementation of setenv() for any stragglers (we have one in the buildfarm, but I'd not be surprised if that code is never used in the field). More importantly, extend win32env.c to also support setenv(). Then, replace usages of putenv() with setenv(), and get rid of some ad-hoc implementations of setenv() wannabees. Also, adjust our src/port/ implementation of unsetenv() to follow the POSIX spec that it returns an error indicator, rather than returning void as per the ancient BSD convention. I don't feel a need to make all the call sites check for errors, but the portability stub ought to match real-world practice. Discussion: https://postgr.es/m/2065122.1609212051@sss.pgh.pa.us
This commit preserves the logic of multirange_in() but makes it more clear what's going on. Also, this commit fixes the compiler warning spotted by the buildfarm. Reported-by: Tom Lane Discussion: https://postgr.es/m/2246043.1609290699%40sss.pgh.pa.us
The behavior of cross-type comparisons among date/time data types was not really explained anywhere. You could probably infer it if you recognized the applicability of comments elsewhere about datatype conversions, but it seems worthy of explicit documentation. Per bug #16797 from Dana Burd. Discussion: https://postgr.es/m/16797-f264b0b980b53b8b@postgresql.org
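For example, the documented behavior amounts to implicit promotion of the less precise type before the comparison; a small self-contained sketch:

```sql
-- The date is converted to a timestamp at midnight, then compared.
SELECT date '2021-01-01' < timestamp '2021-01-01 00:00:01';  -- true
```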
The logic for determining the latest removed XID for the purposes of generating recovery conflicts in REDO routines was subtly broken. It failed to follow links from HOT chains, and so failed to consider all relevant heap tuple headers in some cases. To fix, expand the loop that deals with LP_REDIRECT line pointers to also deal with HOT chains. The new version of the loop is loosely based on a similar loop from heap_prune_chain(). The impact of this bug is probably quite limited, since the horizon code necessarily deals with heap tuples that are pointed to by LP_DEAD-set index tuples. The process of setting LP_DEAD index tuples (e.g. within the kill_prior_tuple mechanism) is highly correlated with opportunistic pruning of pointed-to heap tuples. Plus the question of generating a recovery conflict usually comes up some time after index tuple LP_DEAD bits were initially set, unlike heap pruning, where a latestRemovedXid is generated at the point of the pruning operation (heap pruning has no deferred "would-be page split" style processing that produces conflicts lazily). Only backpatch to Postgres 12, the first version where this logic runs during original execution (following commit 558a916). The index latestRemovedXid mechanism has had the same bug since it first appeared over 10 years ago (in commit a760893), but backpatching to all supported versions now seems like a bad idea on balance. Running the new improved code during recovery seems risky, especially given the lack of complaints from the field. Author: Peter Geoghegan <pg@bowt.ie> Discussion: https://postgr.es/m/CAH2-Wz=Eib393+HHcERK_9MtgNS7Ew1HY=RDC_g6GL46zM5C6Q@mail.gmail.com Backpatch: 12-
Try to be clearer about what computation is actually happening here. Per bug #16797 from Dana Burd. Discussion: https://postgr.es/m/16797-f264b0b980b53b8b@postgresql.org
Backpatch-through: 9.5
This patch allows PREPARE-time decoding of two-phase transactions (if the output plugin supports this capability), in which case the transactions are replayed at PREPARE and then committed later when COMMIT PREPARED arrives. Now that we decode the changes before the commit, concurrent aborts may cause failures when the output plugin consults catalogs (both system and user-defined). We detect such failures with a special sqlerrcode ERRCODE_TRANSACTION_ROLLBACK introduced by commit 7259736 and stop decoding the remaining changes. Then we roll back the changes when ROLLBACK PREPARED is encountered. Author: Ajin Cherian and Amit Kapila based on previous work by Nikhil Sontakke and Stas Kelvich Reviewed-by: Amit Kapila, Peter Smith, Sawada Masahiko, Arseny Sher, and Dilip Kumar Tested-by: Takamichi Osumi Discussion: https://postgr.es/m/02DA5F5E-CECE-4D9C-8B4B-418077E2C010@postgrespro.ru https://postgr.es/m/CAMGcDxeqEpWj3fTXwqhSwBdXd2RS9jzwWscO-XbeCfso6ts3+Q@mail.gmail.com
Author: Justin Pryzby Discussion: https://postgr.es/m/X/Ff7jfnvJUab013@paquier.xyz
This patch essentially allows gram.y to implement a family of related syntax trees, rather than necessarily always parsing a list of SQL statements. raw_parser() gains a new argument, enum RawParseMode, to say what to do. As proof of concept, add a mode that just parses a TypeName without any other decoration, and use that to greatly simplify typeStringToTypeName(). In addition, invent a new SPI entry point SPI_prepare_extended() to allow SPI users (particularly plpgsql) to get at this new functionality. In hopes of making this the last variant of SPI_prepare(), set up its additional arguments as a struct rather than direct arguments, and promise that future additions to the struct can default to zero. SPI_prepare_cursor() and SPI_prepare_params() can perhaps go away at some point. Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
Invent new RawParseModes that allow the core grammar to handle pl/pgsql expressions and assignments directly, and thereby get rid of a lot of hackery in pl/pgsql's parser. This moves a good deal of knowledge about pl/pgsql into the core code: notably, we have to invent a CoercionContext that matches pl/pgsql's (rather dubious) historical behavior for assignment coercions. That's getting away from the original idea of pl/pgsql as an arm's-length extension of the core, but really we crossed that bridge a long time ago. The main advantage of doing this is that we can now use the core parser to generate FieldStore and/or SubscriptingRef nodes to handle assignments to pl/pgsql variables that are records or arrays. That fixes a number of cases that had never been implemented in pl/pgsql assignment, such as nested records and array slicing, and it allows pl/pgsql assignment to support the datatype-specific subscripting behaviors introduced in commit c7aba7c. There are cosmetic benefits too: when a syntax error occurs in a pl/pgsql expression, the error report no longer includes the confusing "SELECT" keyword that used to get prefixed to the expression text. Also, there seem to be some small speed gains. Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
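A hedged sketch of one of the newly supported assignment forms mentioned above (array slicing in a pl/pgsql assignment); the variable names are illustrative and the expected output assumes the core subscripting machinery handles the slice:

```sql
DO $$
DECLARE
  a int[] := ARRAY[1, 2, 3, 4, 5];
BEGIN
  -- Slice assignment now goes through the core parser/executor path.
  a[2:3] := ARRAY[20, 30];
  RAISE NOTICE '%', a;   -- expected: {1,20,30,4,5}
END
$$;
```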
In the wake of the previous commit, we don't really need this anymore, since array assignment is primarily handled by the core code. The only way that that code could still be reached is that a GET DIAGNOSTICS target variable could be an array element. But that doesn't seem like a particularly essential feature. I'd added it in commit 55caaae, but just because it was easy not because anyone had actually asked for it. Hence, revert that patch and then remove the now-unreachable stuff. (If we really had to, we could probably reimplement GET DIAGNOSTICS using the new assignment machinery; but the cost/benefit ratio looks very poor, and it'd likely be a bit slower.) Note that PLPGSQL_DTYPE_RECFIELD remains. It's possible that we could get rid of that too, but maintaining the existing behaviors for RECORD-type variables seems like it might be difficult. Since there's not any functional limitation in those code paths as there was in the ARRAYELEM code, I've not pursued the idea. Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
Performance issues with the preceding patch to re-implement array element assignment within pl/pgsql led me to realize that the read/write parameter mechanism is misdesigned. Instead of requiring the assignment source expression to be such that *all* its references to the target variable could be passed as R/W, we really want to identify *one* reference to the target variable to be passed as R/W, allowing any other ones to be passed read/only as they would be by default. As long as the R/W reference is a direct argument to the top-level (hence last to be executed) function in the expression, there is no harm in R/O references being passed to other lower parts of the expression. Nor is there any use-case for more than one argument of the top-level function being R/W. Hence, rewrite that logic to identify one single Param that references the target variable, and make only that Param pass a read/write reference, not any other Params referencing the target variable. Discussion: https://postgr.es/m/4165684.1607707277@sss.pgh.pa.us
Cleanup for commit dee663f. Reported-by: Tomas Vondra <tomas.vondra@enterprisedb.com> Discussion: https://postgr.es/m/CA+hUKGLJ=84YT+NvhkEEDAuUtVHMfQ9i-N7k_o50JmQ6Rpj_OQ@mail.gmail.com
If the substring start index and length overflow when added together, substring() misbehaved, either throwing a bogus "negative substring length" error on a case that should succeed, or failing to complain that a negative length is negative (and instead returning the whole string, in most cases). Unsurprisingly, the text, bytea, and bit variants of the function all had this issue. Rearrange the logic to ensure that negative lengths are always rejected, and add an overflow check to handle the other case. Also install similar guards into detoast_attr_slice() (nee heap_tuple_untoast_attr_slice()), since it's far from clear that no other code paths leading to that function could pass it values that would overflow. Patch by myself and Pavel Stehule, per bug #16804 from Rafi Shamim. Back-patch to v11. While these bugs are old, the common/int.h infrastructure for overflow-detecting arithmetic didn't exist before commit 4d6ad31, and it doesn't seem like these misbehaviors are bad enough to justify developing a standalone fix for the older branches. Discussion: https://postgr.es/m/16804-f4eeeb6c11ba71d4@postgresql.org
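Two hedged examples of the corrected behavior (exact error wording may vary):

```sql
-- A negative length is now always rejected ...
SELECT substring('hello' from 2 for -1);          -- ERROR: negative substring length not allowed
-- ... and a start+length sum that overflows int32 no longer triggers that
-- error spuriously; the remainder of the string is returned instead.
SELECT substring('hello' from 3 for 2147483646);  -- 'llo'
```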
We agreed to remove this terminology and use something more descriptive. Discussion: https://postgr.es/m/20200615182235.x7lch5n6kcjq4aue%40alap3.anarazel.de
In a cluster having used CREATE INDEX CONCURRENTLY while having enabled prepared transactions, queries that use the resulting index can silently fail to find rows. Fix this for future CREATE INDEX CONCURRENTLY by making it wait for prepared transactions like it waits for ordinary transactions. This expands the VirtualTransactionId structure domain to admit prepared transactions. It may be necessary to reindex to recover from past occurrences. Back-patch to 9.5 (all supported versions). Andrey Borodin, reviewed (in earlier versions) by Tom Lane and Michael Paquier. Discussion: https://postgr.es/m/2E712143-97F7-4890-B470-4A35142ABC82@yandex-team.ru
CREATE PUBLICATION has failed spuriously when applied to a permanent relation created or rewritten in the current transaction. Make the same change to another site having the same semantic intent; the second instance has no user-visible consequences. Back-patch to v13, where commit c6b9204 broke this. Kyotaro Horiguchi Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
Given a permanent relation rewritten in the current transaction, the old_snapshot_threshold mechanism assumed the relation had never been subject to early pruning. Hence, a query could fail to report "snapshot too old" when the rewrite followed an early truncation. ALTER TABLE SET TABLESPACE is probably the only rewrite mechanism capable of exposing this bug. REINDEX sets indcheckxmin, avoiding the problem. CLUSTER has zeroed page LSNs since before old_snapshot_threshold existed, so old_snapshot_threshold has never cooperated with it. ALTER TABLE ... SET DATA TYPE makes the table look empty to every past snapshot, which is strictly worse. Back-patch to v13, where commit c6b9204 broke this. Kyotaro Horiguchi and Noah Misch Discussion: https://postgr.es/m/20210113.160705.2225256954956139776.horikyota.ntt@gmail.com
The SQL standard allows a GRANTED BY clause on GRANT and REVOKE (privilege) statements that can specify CURRENT_USER or CURRENT_ROLE. In PostgreSQL, both of these are the default behavior. Since we already have all the parsing support for this for the GRANT (role) statement, we might as well add basic support for this for the privilege variant as well. This allows us to check off SQL feature T332. In the future, perhaps more interesting things could be done with this, too. Reviewed-by: Simon Riggs <simon@2ndquadrant.com> Discussion: https://www.postgresql.org/message-id/flat/f2feac44-b4c5-f38f-3699-2851d6a76dc9@2ndquadrant.com
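A minimal sketch of the accepted syntax; the table and role names are hypothetical, and only CURRENT_USER/CURRENT_ROLE (the default grantor) are allowed:

```sql
GRANT SELECT ON TABLE accounts TO alice GRANTED BY CURRENT_USER;
REVOKE SELECT ON TABLE accounts FROM alice GRANTED BY CURRENT_ROLE;
```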
The documentation as well as source code comments weren't entirely clear whether SELECT INTO was truly deprecated (thus in theory destined to be removed eventually), or just a less recommended variant. After discussion, it appears that other implementations also use SELECT INTO in direct SQL in a way similar to PostgreSQL, so it seems worth keeping it for compatibility. Update the language in the documentation to that effect. Discussion: https://www.postgresql.org/message-id/flat/96dc0df3-e13a-a85d-d045-d6e2c85218da%40enterprisedb.com
For those system catalogs that have unique indexes, make a primary key and unique constraints, using ALTER TABLE ... PRIMARY KEY/UNIQUE USING INDEX. This can be helpful for GUI tools that look for a primary key, and it might in the future allow declaring foreign keys, for making schema diagrams. The constraint creation statements are automatically created by genbki.pl from DECLARE_UNIQUE_INDEX directives. To specify which one of the available unique indexes is the primary key, use the new directive DECLARE_UNIQUE_INDEX_PKEY instead. By convention, we usually make a catalog's OID column its primary key, if it has one. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/dc5f44d9-5ec1-a596-0251-dadadcdede98@2ndquadrant.com
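On a server built with this change, the new constraints are visible through pg_constraint; a quick check, for example:

```sql
-- 'p' = primary key, 'u' = unique; pg_class should now show both kinds.
SELECT conname, contype
FROM pg_constraint
WHERE conrelid = 'pg_class'::regclass;
```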
When to_tsquery() or websearch_to_tsquery() meets a complex morph containing
multiple words residing in adjacent positions, these words are connected
with the OP_AND operator. That leads to surprising results. For instance,
both websearch_to_tsquery('"pg_class pg"') and to_tsquery('pg_class <-> pg')
produce the '( pg & class ) <-> pg' tsquery. This tsquery requires the
'pg' and 'class' words to reside at the same position and doesn't match
to_tsvector('pg_class pg'). That is clearly wrong behavior, which
needs to be fixed.
This commit makes to_tsquery() and websearch_to_tsquery() connect words
residing in adjacent positions with OP_PHRASE. Therefore, those words are
now chained together with the other OP_PHRASE operators. The examples above
now produce the 'pg <-> class <-> pg' tsquery, which matches
to_tsvector('pg_class pg'); see the queries after this message.
Another effect of this commit is that complex morph word positions now need to
match the tsvector even if there is no surrounding OP_PHRASE. This behavior
change generally looks like an improvement but makes this commit not
backpatchable.
Reported-by: Barry Pederson
Bug: #16592
Discussion: https://postgr.es/m/16592-70b110ff9731c07d@postgresql.org
Discussion: https://postgr.es/m/CAPpHfdv0EzVhf6CWfB1_TTZqXV_2Sn-jSY3zSd7ePH%3D-%2B1V2DQ%40mail.gmail.com
Author: Alexander Korotkov
Reviewed-by: Tom Lane, Neil Chen
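The example from the message above, as a pair of queries one can run to see the new behavior:

```sql
SELECT to_tsquery('pg_class <-> pg');
-- now: 'pg' <-> 'class' <-> 'pg'   (previously: ( 'pg' & 'class' ) <-> 'pg')
SELECT to_tsvector('pg_class pg') @@ to_tsquery('pg_class <-> pg');  -- now true
```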
The latestRemovedXid values used by nbtree deletion operations are determined by _bt_delitems_delete()'s caller, so there is no reason to pass a separate heapRel argument. Oversight in commit d168b66.
Subscripting for jsonb does not support slices, does not have a limit for the number of subscripts, and an assignment expects a replace value to have jsonb type. There is also one functional difference between assignment via subscripting and assignment via jsonb_set(). When an original jsonb container is NULL, the subscripting replaces it with an empty jsonb and proceeds with an assignment. For the sake of code reuse, we rearrange some parts of jsonb functionality to allow the usage of the same functions for jsonb_set and assign subscripting operation. The original idea belongs to Oleg Bartunov. Catversion is bumped. Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com Author: Dmitry Dolgov Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
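A hedged sketch of the new subscripting syntax; the table and column names are hypothetical:

```sql
CREATE TABLE docs (id int, jdoc jsonb);
INSERT INTO docs VALUES (1, '{"a": {"b": 1}}');

-- Read through subscripts ...
SELECT jdoc['a']['b'] FROM docs WHERE id = 1;        -- 1
-- ... and assign through them; the replacement value must be (castable to) jsonb.
UPDATE docs SET jdoc['a']['b'] = '42' WHERE id = 1;
```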
This commit introduces two new flags for jsonb assignment: * JB_PATH_FILL_GAPS: When appending an array element at the specified position, any gaps are filled with nulls (similar to the JavaScript behavior). This mode also instructs to create the whole path in a jsonb object if some part of the path (more than just the last element) is not present. * JB_PATH_CONSISTENT_POSITION: Assigning keeps array positions consistent by preventing prepending of elements. Both flags are used only in jsonb subscripting assignment. Initially proposed by Nikita Glukhov based on the polymorphic subscripting patch, but transformed into an independent change. Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com Author: Dmitry Dolgov Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
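Continuing the hypothetical docs table from the previous sketch, this is roughly what the gap-filling behavior looks like for an out-of-range array subscript:

```sql
UPDATE docs SET jdoc = '{"vals": [1]}' WHERE id = 1;
-- Intermediate positions are padded with nulls rather than erroring out.
UPDATE docs SET jdoc['vals'][4] = '9' WHERE id = 1;
SELECT jdoc -> 'vals' FROM docs WHERE id = 1;   -- expected: [1, null, null, null, 9]
```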
During the jsonb subscripting assignment, the provided path might assume an object or an array where the source jsonb has a scalar value. Initial subscripting assignment logic will skip such an update operation with no message shown. This commit makes it throw an error to indicate this type of situation. Discussion: https://postgr.es/m/CA%2Bq6zcV8qvGcDXurwwgUbwACV86Th7G80pnubg42e-p9gsSf%3Dg%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcX3mdxGCgdThzuySwH-ApyHHM-G4oB1R0fn0j2hZqqkLQ%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVDuGBv%3DM0FqBYX8DPebS3F_0KQ6OVFobGJPM507_SZ_w%40mail.gmail.com Discussion: https://postgr.es/m/CA%2Bq6zcVovR%2BXY4mfk-7oNk-rF91gH0PebnNfuUjuuDsyHjOcVA%40mail.gmail.com Author: Dmitry Dolgov Reviewed-by: Tom Lane, Arthur Zakirov, Pavel Stehule, Dian M Fay Reviewed-by: Andrew Dunstan, Chapman Flack, Merlin Moncure, Peter Geoghegan Reviewed-by: Alvaro Herrera, Jim Nasby, Josh Berkus, Victor Wagner Reviewed-by: Aleksander Alekseev, Robert Haas, Oleg Bartunov
On machines where sizeof(Datum) > sizeof(Oid) (that is, any 64-bit platform), the previous coding would compute a misaligned workspace->index pointer if nupper is odd. Architectures where misaligned access is a hard no-no would then fail. This appears to explain why thorntail is unhappy but other buildfarm members are not.
This is a replacement for the existing --with-openssl, extending the logic to make it easier to add new SSL libraries. The grammar is chosen to be similar to --with-uuid, where multiple values can be chosen, with "openssl" as the only supported value for now. The original switch, --with-openssl, is kept for compatibility. Author: Daniel Gustafsson, Michael Paquier Reviewed-by: Jacob Champion Discussion: https://postgr.es/m/FAB21FC8-0F62-434F-AA78-6BD9336D630A@yesql.se
Current code allocates memory for JsonbValue, but it could be placed locally.
This adds the SQL standard SEARCH and CYCLE clauses to recursive queries, to produce breadth- or depth-first search orders and to detect cycles. These clauses can be rewritten into queries using existing syntax, and that is what this patch does in the rewriter. Reviewed-by: Vik Fearing <vik@postgresfriends.org> Reviewed-by: Pavel Stehule <pavel.stehule@gmail.com> Discussion: https://www.postgresql.org/message-id/flat/db80ceee-6f97-9b4a-8ee8-3ba0c58e5be2@2ndquadrant.com
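A sketch of the new syntax, closely following the standard form; the tree table and its columns are hypothetical:

```sql
WITH RECURSIVE search_tree(id, link, data) AS (
    SELECT t.id, t.link, t.data FROM tree t
  UNION ALL
    SELECT t.id, t.link, t.data
    FROM tree t, search_tree st
    WHERE t.id = st.link
) SEARCH BREADTH FIRST BY id SET ordercol
-- A cycle check could be added similarly: CYCLE id SET is_cycle USING path
SELECT * FROM search_tree ORDER BY ordercol;
```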
It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider. As a first step, teach make_partition_pruneinfo to collect the relevant partitioned tables for itself. It's not hard to do so by traversing from child tables up to parents using the AppendRelInfo links. While here, make some minor stylistic improvements; mainly, don't use the "Relids" alias for bitmapsets that are not identities of any relation considered by the planner. Try to document the logic better, too. No backpatch, as there does not seem to be a live problem before a929e17. Also no new regression test; the code where the bug was will be gone at the end of this patch series, so it seems a bit pointless to memorialize the issue. Tom Lane and David Rowley, per reports from Andreas Seltenreich and Jaime Casanova. Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider. This patch undoes a couple of very minor uses of the partitioned_rels values. createplan.c was testing for nil-ness to optimize away the preparatory work for make_partition_pruneinfo(). That is worth doing if the check is nigh free, but it's not worth going to any great lengths to avoid. create_append_path() was testing for nil-ness as part of deciding how to set up ParamPathInfo for an AppendPath. I replaced that with a check for the appendrel's parent rel being partitioned. That's not quite the same thing but should cover most cases. If we note any interesting loss of optimizations, we can dumb this down to just always use the more expensive method when the parent is a baserel. Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
It turns out that the calculation of [Merge]AppendPath.partitioned_rels in allpaths.c is faulty and sometimes omits relevant non-leaf partitions, allowing an assertion added by commit a929e17 to trigger. Rather than fix that, it seems better to get rid of those fields altogether. We don't really need the info until create_plan time, and calculating it once for the selected plan should be cheaper than calculating it for each append path we consider. The preceding two commits did away with all use of the partitioned_rels values; this commit just mechanically removes the fields and the code that calculated them. Discussion: https://postgr.es/m/87sg8tqhsl.fsf@aurora.ydns.eu Discussion: https://postgr.es/m/CAJKUy5gCXDSmFs2c=R+VGgn7FiYcLCsEFEuDNNLGfoha=pBE_g@mail.gmail.com
Writing unnecessary '.*' at start and end of a POSIX regex doesn't do much except confuse the reader about whether that might be necessary after all. Make the examples in table 9.16 a tad more realistic, and try to turn the next group of examples into something self-contained. Per gripe from rmzgrimes. Back-patch to v13 because it's easy. Discussion: https://postgr.es/m/161215841824.14653.8969016349304314299@wrigleys.postgresql.org
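The point being made, in query form:

```sql
-- ~ looks for a match anywhere in the string, so the anchor-less pattern
-- and the '.*'-padded one behave identically here.
SELECT 'abcd' ~ 'bc' AS plain, 'abcd' ~ '.*bc.*' AS padded;   -- both true
```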
The initial tablesync done by logical replication used a query to fetch the information of a relation's columns that included atttypmod, but it was left unused. This was added by 7c4f524. Author: Euler Taveira Reviewed-by: Önder Kalacı, Amit Langote, Japin Li Discussion: https://postgr.es/m/CAHE3wggb715X+mK_DitLXF25B=jE6xyNCH4YOwM860JR7HarGQ@mail.gmail.com
The prototype calls the second argument of pgstat_progress_update_multi_param() "index", and some callers name their local variable that way. But when the surrounding code deals with index relations, this is confusing, and in at least one case shadowed another variable that is referring to an index relation. Adjust those call sites to have clearer local variable naming, similar to existing callers in indexcmds.c.
The ExecutorEnd hook is invoked in a context that could be quite long-lived, not the executor's own per-query context as I think we were sort of assuming. Thus, any cruft generated while producing the EXPLAIN output could accumulate over multiple queries. This can result in spectacular leakage if log_nested_statements is on, and even without that I'm surprised nobody complained before. To fix, just switch into the executor's context so that anything we allocate will be released when standard_ExecutorEnd frees the executor state. We might as well nuke the code's retail pfree of the explain output string, too; that's laughably inadequate to the need. Japin Li, per report from Jeff Janes. This bug is old, so back-patch to all supported branches. Discussion: https://postgr.es/m/CAMkU=1wCVtbeRn0s9gt12KwQ7PLXovbpM8eg25SYocKW3BT4hg@mail.gmail.com
This left gaps in the internal statement numbering, which is not terribly harmful (else we'd have noticed sooner), but it's not great either. Oversight in bbd5c20; backpatch to v12 where that came in. Pavel Stehule Discussion: https://postgr.es/m/CAFj8pRDXyQaJmpotNTQVc-t-WxdWZC35V2PnmwOaV1-taidFWA@mail.gmail.com
Not all OID-reference columns that can contain zeroes (indicating "no reference") were explicitly marked in catalogs.sgml. Fix that, and try to make the style a little more consistent while at it --- for example, consistently write "zero" not "0" for these cases. Joel Jacobson and Tom Lane Discussion: https://postgr.es/m/4ed9a372-7bf9-479a-926c-ae8e774717a8@www.fastmail.com
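For instance, the convention being documented is that a zero OID means "no reference", which can be queried directly:

```sql
-- Tables with no TOAST table store zero, not NULL, in reltoastrelid.
SELECT relname FROM pg_class WHERE reltoastrelid = 0 LIMIT 5;
```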
This follows in the spirit of commit dfb75e4, which created primary key and uniqueness constraints to improve the visibility of constraints imposed on the system catalogs. While our catalogs contain many foreign-key-like relationships, they don't quite follow SQL semantics, in that the convention for an omitted reference is to write zero not NULL. Plus, we have some cases in which there are arrays each of whose elements is supposed to be an FK reference; SQL has no way to model that. So we can't create actual foreign key constraints to describe the situation. Nonetheless, we can collect and use knowledge about these relationships. This patch therefore adds annotations to the catalog header files to declare foreign-key relationships. (The BKI_LOOKUP annotations cover simple cases, but we weren't previously distinguishing which such columns are allowed to contain zeroes; we also need new markings for multi-column FK references.) Then, Catalog.pm and genbki.pl are taught to collect this information into a table in a new generated header "system_fk_info.h". The only user of that at the moment is a new SQL function pg_get_catalog_foreign_keys(), which exposes the table to SQL. The oidjoins regression test is rewritten to use pg_get_catalog_foreign_keys() to find out which columns to check. Aside from removing the need for manual maintenance of that test script, this allows it to cover numerous relationships that were not checked by the old implementation based on findoidjoins. (As of this commit, 217 relationships are checked by the test, versus 181 before.) Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
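A quick look at the new function (the exact column set shipped may differ slightly from what this sketch assumes):

```sql
-- Each row describes one declared catalog foreign-key relationship.
SELECT * FROM pg_get_catalog_foreign_keys() LIMIT 5;
```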
In the wake of commit 62f3409, we no longer need this tool. Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
Generation expressions of generated columns are always inherited, so there is no need to set them separately in child tables, and there is no syntax to do so either. The code previously used the code paths for the handling of default values, for which different rules apply; in particular it might want to set a default value explicitly for an inherited column. This resulted in unrestorable dumps. For generated columns, just skip them in inherited tables. Reviewed-by: Tom Lane <tgl@sss.pgh.pa.us> Discussion: https://www.postgresql.org/message-id/flat/15830.1575468847%40sss.pgh.pa.us
Now that commit 62f3409 attached BKI_LOOKUP annotation to all the namespace and role OID columns in the catalogs, there's no real reason to have the magic PGNSP and PGUID symbols. Get rid of them in favor of implementing those lookups according to genbki.pl's normal pattern. This means that in the catalog headers, BKI_DEFAULT(PGNSP) becomes BKI_DEFAULT(pg_catalog), which seems a lot more transparent. BKI_DEFAULT(PGUID) becomes BKI_DEFAULT(POSTGRES), which is perhaps less so; but you can look into pg_authid.dat to discover that POSTGRES is the nonce name for the bootstrap superuser. This change also means that if we ever need cross-references in the initial catalog data to any of the other built-in roles besides POSTGRES, or to some other built-in schema besides pg_catalog, we can just do it. No catversion bump here, as there's no actual change in the contents of postgres.bki. Discussion: https://postgr.es/m/3240355.1612129197@sss.pgh.pa.us
The logic for converting the shell-glob-like syntax supported by utilities like psql and pg_dump to regular expression is extracted into a new function patternToSQLRegex. The existing function processSQLNamePattern now uses this function as a subroutine. patternToSQLRegex is a little more general than what is required by processSQLNamePattern. That function is only interested in patterns that can have up to 2 parts, a schema and a relation; but patternToSQLRegex can limit the maximum number of parts to between 1 and 3, so that patterns can look like either "database.schema.relation", "schema.relation", or "relation" depending on how it's invoked and what the user specifies. processSQLNamePattern only passes two buffers, so works exactly the same as before, always interpreting the pattern as either a "schema.relation" pattern or a "relation" pattern. But, future callers can use this function in other ways. Mark Dilger, reviewed by me. The larger patch series of which this is a part has also had review from Peter Geoghegan, Andres Freund, Álvaro Herrera, Michael Paquier, and Amul Sul, but I don't know whether any of them have reviewed this bit specifically. Discussion: http://postgr.es/m/12ED3DA8-25F0-4B68-937D-D907CFBF08E7@enterprisedb.com Discussion: http://postgr.es/m/5F743835-3399-419C-8324-2D424237E999@enterprisedb.com Discussion: http://postgr.es/m/70655DF3-33CE-4527-9A4D-DDEB582B6BA0@enterprisedb.com