

FYI: :$i30:$bitmap NTFS bug fixed - March 2, 2021 - 10:49

Windows 10 bug can corrupt drives with a single command, patched on latest Insider build

— XDA (@xdadevelopers) March 2, 2021

According to reports, the NTFS bug that caused file system corruption has been fixed in the latest Windows 10 Insider "Dev" channel build. Details here.

Categories: IT

International PHP Conference Berlin 2021 - March 1, 2021 - 13:37
Categories: IT

Android 11-based OxygenOS 11 is coming to OnePlus Nord devices - March 1, 2021 - 10:06

OnePlus starts rolling out OxygenOS 11 based on Android 11 to the OnePlus Nord

— XDA (@xdadevelopers) March 1, 2021


Categories: IT

PostgreSQL Weekly News - February 28, 2021 - March 1, 2021 - 01:00
PostgreSQL Weekly News - February 28, 2021

Database Lab 2.2.1, a tool for fast cloning of large PostgreSQL databases to build non-production environments, released.

dbMigration .NET v13.4, a database migration and sync tool, released.

Joe 0.9.0, a Slack chatbot that helps backend developers and DBAs troubleshoot and optimize PostgreSQL queries, released.

pgAdmin4 5.0, a web- and native GUI control center for PostgreSQL, released.

pgagroal 1.2.0, a high-performance protocol-native connection pool for PostgreSQL, released.

Person of the week:

PostgreSQL Product News

PostgreSQL Jobs for February

PostgreSQL in the News

Planet PostgreSQL:

PostgreSQL Weekly News is brought to you this week by David Fetter

Submit news and announcements by Sunday at 3:00pm PST8PDT to

Applied Patches

Tom Lane pushed:

  • Fix invalid array access in trgm_regexp.c. Brown-paper-bag bug in 08c0d6ad6: I missed one place that needed to guard against RAINBOW arc colors. Remarkably, nothing noticed the invalid array access except buildfarm member thorntail. Thanks to Noah Misch for assistance with tracking this down.

  • Simplify memory management for regex DFAs a little. Coverity complained that functions in regexec.c might leak DFA storage. It's wrong, but this logic is confusing enough that it's not so surprising Coverity couldn't make sense of it. Rewrite in hopes of making it more legible to humans as well as machines.

  • Suppress compiler warning in new regex match-all detection code. gcc 10 is smart enough to notice that control could reach this "hasmatch[depth]" assignment with depth < 0, but not smart enough to know that that would require a badly broken NFA graph. Change the assert() to a plain runtime test to shut it up. Per report from Andres Freund. Discussion:

  • Allow complemented character class escapes within regex brackets. The complement-class escapes \D, \S, \W are now allowed within bracket expressions. There is no semantic difficulty with doing that, but the rather hokey macro-expansion-based implementation previously used here couldn't cope. Also, invent "word" as an allowed character class name, thus "\w" is now equivalent to "[[:word:]]" outside brackets, or "[:word:]" within brackets. POSIX allows such implementation-specific extensions, and the same name is used in e.g. bash. One surprising compatibility issue this raises is that constructs such as "[\w-_]" are now disallowed, as our documentation has always said they should be: character classes can't be endpoints of a range. Previously, because \w was just a macro for "[:alnum:]_", such a construct was read as "[[:alnum:]_-_]", so it was accepted so long as the character after "-" was numerically greater than or equal to "_". Some implementation cleanup along the way: * Remove the lexnest() hack, and in consequence clean up wordchrs() to not interact with the lexer. * Fix colorcomplement() to not be O(N^2) in the number of colors involved. * Get rid of useless-as-far-as-I-can-see calls of element() on single-character character element names in brackpart(). element() always maps these to the character itself, and things would be quite broken if it didn't --- should "[a]" match something different than "a" does? Besides, the shortcut path in brackpart() wasn't doing this anyway, making it even more inconsistent. Discussion:
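
A quick illustration of the new escapes (a sketch, runnable only on a build that contains this commit):

```sql
-- \w is now equivalent to [[:word:]] outside brackets,
-- and [:word:] is accepted within brackets:
SELECT 'a' ~ '\w' AS shorthand,
       'a' ~ '[[:word:]]' AS class_name;

-- Complement-class escapes are now allowed inside brackets:
SELECT 'a' ~ '[\D]' AS in_brackets;

-- A class escape can no longer be a range endpoint, so this now
-- raises an error, as the documentation always said it should:
-- SELECT 'a' ~ '[\w-_]';
```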

  • Change regex \D and \W shorthands to always match newlines. Newline is certainly not a digit, nor a word character, so it is sensible that it should match these complemented character classes. Previously, \D and \W acted that way by default, but in newline-sensitive mode ('n' or 'p' flag) they did not match newlines. This behavior was previously forced because explicit complemented character classes don't match newlines in newline-sensitive mode; but as of the previous commit that implementation constraint no longer exists. It seems useful to change this because the primary real-world use for newline-sensitive mode seems to be to match the default behavior of other regex engines such as Perl and Javascript ... and their default behavior is that these match newlines. The old behavior can be kept by writing an explicit complemented character class, i.e. [^[:digit:]] or [^[:word:]]. (This means that \D and \W are not exactly equivalent to those strings, but they weren't anyway.) Discussion:
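
A sketch of the changed semantics, using the embedded option (?n) for newline-sensitive matching (requires a build containing this commit; E'' embeds a literal newline):

```sql
-- \D and \W now match a newline even in newline-sensitive mode;
-- previously both of these were false:
SELECT E'\n' ~ '(?n)\D' AS nondigit,
       E'\n' ~ '(?n)\W' AS nonword;

-- The old behavior is still available by spelling the class out;
-- explicit complemented classes do not match newline under (?n):
SELECT E'\n' ~ '(?n)[^[:digit:]]' AS explicit_class;  -- false
```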

  • Doc: remove src/backend/regex/re_syntax.n. We aren't publishing this file as documentation, and it's been much more haphazardly maintained than the real docs in func.sgml, so let's just drop it. I think the only reason I included it in commit 7bcc6d98f was that the Berkeley-era sources had had a man page in this directory. Discussion:

  • Fix list-manipulation bug in WITH RECURSIVE processing. makeDependencyGraphWalker and checkWellFormedRecursionWalker thought they could hold onto a pointer to a list's first cons cell while the list was modified by recursive calls. That was okay when the cons cell was actually separately palloc'd ... but since commit 1cff1b95a, it's quite unsafe, leading to core dumps or incorrect complaints of faulty WITH nesting. In the field this'd require at least a seven-deep WITH nest to cause an issue, but enabling DEBUG_LIST_MEMORY_USAGE allows the bug to be seen with lesser nesting depths. Per bug #16801 from Alexander Lakhin. Back-patch to v13. Michael Paquier and Tom Lane Discussion:

  • Improve memory management in regex compiler. The previous logic here created a separate pool of arcs for each state, so that the out-arcs of each state were physically stored within it. Perhaps this choice was driven by trying to not include a "from" pointer within each arc; but Spencer gave up on that idea long ago, and it's hard to see what the value is now. The approach turns out to be fairly disastrous in terms of memory consumption, though. In the first place, NFAs built by this engine seem to have about 4 arcs per state on average, with a majority having only one or two out-arcs. So pre-allocating 10 out-arcs for each state is already cause for a factor of two or more bloat. Worse, the NFA optimization phase moves arcs around with abandon. In a large NFA, some of the states will have hundreds of out-arcs, so towards the end of the optimization phase we have a significant number of states whose arc pools have room for hundreds of arcs each, even though only a few of those arcs are in use. We have seen real-world regexes in which this effect bloats the memory requirement by 25X or even more. Hence, get rid of the per-state arc pools in favor of a single arc pool for the whole NFA, with variable-sized allocation batches instead of always asking for 10 at a time. While we're at it, let's batch the allocations of state structs too, to further reduce the malloc traffic. This incidentally allows moveouts() to be optimized in a similar way to moveins(): when moving an arc to another state, it's now valid to just re-link the same arc struct into a different outchain, where before the code invariants required us to make a physically new arc and then free the old one. These changes reduce the regex compiler's typical space consumption for average-size regexes by about a factor of two, and much more for large or complicated regexes. 
In a large test set of real-world regexes, we formerly had half a dozen cases that failed with "regular expression too complex" due to exceeding the REG_MAX_COMPILE_SPACE limit (about 150MB); we would have had to raise that limit to something close to 400MB to make them work with the old code. Now, none of those cases need more than 13MB to compile. Furthermore, the test set is about 10% faster overall due to less malloc traffic. Discussion:

  • Doc: further clarify libpq's description of connection string URIs. Break the synopsis into named parts to make it less confusing. Make more than zero effort at applying SGML markup. Do a bit of copy-editing of nearby text. The synopsis revision is by Alvaro Herrera and Paul Förster, the rest is my fault. Back-patch to v10 where multi-host connection strings appeared. Discussion:

Thomas Munro pushed:

Michaël Paquier pushed:

Peter Eisentraut pushed:

Fujii Masao pushed:

Magnus Hagander pushed:

  • Fix docs build for website styles. Building the docs with STYLE=website referenced a stylesheet that no longer exists on the website, since we changed it to use versioned references. To make it less likely for this to happen again, point to a single stylesheet on the website which will in turn import the required one. That puts the process entirely within the scope of the website repository, so next time a version is switched that's the only place changes have to be made, making them less likely to be missed. Per (off-list) discussion with Peter Geoghegan and Jonathan Katz.

Álvaro Herrera pushed:

Amit Kapila pushed:

Peter Geoghegan pushed:

  • Use full 64-bit XIDs in deleted nbtree pages. Otherwise we risk "leaking" deleted pages by making them non-recyclable indefinitely. Commit 6655a729 did the same thing for deleted pages in GiST indexes. That work was used as a starting point here. Stop storing an XID indicating the oldest bpto.xact across all deleted though unrecycled pages in nbtree metapages. There is no longer any reason to care about that condition/the oldest XID. It only ever made sense when wraparound was something _bt_vacuum_needs_cleanup() had to consider. The btm_oldest_btpo_xact metapage field has been repurposed and renamed. It is now btm_last_cleanup_num_delpages, which is used to remember how many non-recycled deleted pages remain from the last VACUUM (in practice its value is usually the precise number of pages that were _newly deleted_ during the specific VACUUM operation that last set the field). The general idea behind storing btm_last_cleanup_num_delpages is to use it to give _some_ consideration to non-recycled deleted pages inside _bt_vacuum_needs_cleanup() -- though never too much. We only really need to avoid leaving a truly excessive number of deleted pages in an unrecycled state forever. We only do this to cover certain narrow cases where no other factor makes VACUUM do a full scan, and yet the index continues to grow (and so actually misses out on recycling existing deleted pages). These metapage changes result in a clear user-visible benefit: We no longer trigger full index scans during VACUUM operations solely due to the presence of only 1 or 2 known deleted (though unrecycled) blocks from a very large index. All that matters now is keeping the costs and benefits in balance over time. Fix an issue that has been around since commit 857f9c36, which added the "skip full scan of index" mechanism (i.e. the _bt_vacuum_needs_cleanup() logic). The accuracy of btm_last_cleanup_num_heap_tuples accidentally hinged upon when the source value gets stored. 
We now always store btm_last_cleanup_num_heap_tuples in btvacuumcleanup(). This fixes the issue because IndexVacuumInfo.num_heap_tuples (the source field) is expected to accurately indicate the state of the table _after_ the VACUUM completes inside btvacuumcleanup(). A backpatchable fix cannot easily be extracted from this commit. A targeted fix for the issue will follow in a later commit, though that won't happen today. I (pgeoghegan) have chosen to remove any mention of deleted pages in the documentation of the vacuum_cleanup_index_scale_factor GUC/param, since the presence of deleted (though unrecycled) pages is no longer of much concern to users. The vacuum_cleanup_index_scale_factor description in the docs now seems rather unclear in any case, and it should probably be rewritten in the near future. Perhaps some passing mention of page deletion will be added back at the same time. Bump XLOG_PAGE_MAGIC due to nbtree WAL records using full XIDs now. Author: Peter Geoghegan Reviewed-By: Masahiko Sawada <> Discussion:

  • VACUUM VERBOSE: Count "newly deleted" index pages. Teach VACUUM VERBOSE to report on pages deleted by the current VACUUM operation -- these are newly deleted pages. VACUUM VERBOSE continues to report on the total number of deleted pages in the entire index (no change there). The former is a subset of the latter. The distinction between each category of deleted index page only arises with index AMs where page deletion is supported and is decoupled from page recycling for performance reasons. This is follow-up work to commit e5d8a999, which made nbtree store 64-bit XIDs (not 32-bit XIDs) in pages at the point at which they're deleted. Note that the btm_last_cleanup_num_delpages metapage field added by that commit usually gets set to pages_newly_deleted. The exceptions (the scenarios in which they're not equal) all seem to be tricky cases for the implementation (of page deletion and recycling) in general. Author: Peter Geoghegan Discussion:

David Rowley pushed:

  • Add TID Range Scans to support efficient scanning ranges of TIDs. This adds a new executor node named TID Range Scan. The query planner will generate paths for TID Range scans when quals are discovered on base relations which search for ranges on the table's ctid column. These ranges may be open at either end. For example, WHERE ctid >= '(10,0)'; will return all tuples on page 10 and over. To support this, two new optional callback functions have been added to table AM. scan_set_tidrange is used to set the scan range to just the given range of TIDs. scan_getnextslot_tidrange fetches the next tuple in the given range. For AMs where scanning ranges of TIDs would not make sense, these functions can be set to NULL in the TableAmRoutine. The query planner won't generate TID Range Scan Paths in that case. Author: Edmund Horner, David Rowley Reviewed-by: David Rowley, Tomas Vondra, Tom Lane, Andres Freund, Zhihong Yu Discussion:
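
The qual from the commit message, sketched in use (hypothetical table name; requires a build containing this commit):

```sql
-- Everything from page 10 onward; the planner can now choose a
-- TID Range Scan instead of a Seq Scan plus filter:
EXPLAIN SELECT * FROM my_table WHERE ctid >= '(10,0)';

-- Ranges may be bounded at both ends, touching only the covered pages:
SELECT count(*) FROM my_table
WHERE ctid >= '(10,0)' AND ctid < '(20,0)';
```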

  • Add missing TidRangeScan readfunc. Mistakenly forgotten in bb437f995

Noah Misch pushed:

Pending Patches

Justin Pryzby sent in another revision of a patch to make INSERT SELECT use BulkInsertState and multi_insert, check for volatile defaults to ensure that any dependencies on them not be lost, make COPY flush the multi-insert buffer based on accumulated size of tuples, rather than line length, and check tuple size for a more accurate measure of chunk size when computing when to flush the buffer.

Hou Zhijie sent in another revision of a patch to add one GUC and one per-table option, both named enable_parallel_dml, to control whether DMLs include an option to execute in parallel.

Bharath Rupireddy sent in another revision of a patch to add GUCs both at the FDW level and at the foreign server level called keep_connections.

Masahiko Sawada sent in a patch to add a check whether or not to do index vacuum (and heap vacuum) based on whether or not 1% of all heap pages have an LP_DEAD line pointer.

Shenhao Wang sent in a patch to make --enable-coverage succeed without finding lcov, as the actual coverage tests can run without it.

Jim Mlodgenski sent in a patch to add a parser hook.

Mats Kindahl sent in a patch to add a callback to TableAccessMethod that is called when the table should be scheduled for unlinking, and to implement the method for the heap access method.

Justin Pryzby sent in three more revisions of a patch to report text parameters during errors in typinput, and exercise parameter output on error with binary parameters.

Daniel Gustafsson sent in two more revisions of a patch to make it possible to use NSS for libpq's TLS backend.

Jan Wieck sent in another revision of a patch to make the wire protocol pluggable and use same to answer via telnet.

Justin Pryzby sent in another revision of a patch to touch up the documentation for the upcoming release.

Iwata Aya and Álvaro Herrera traded patches to improve libpq tracing capabilities.

Amit Kapila sent in a patch to update the docs and comments for decoding of prepared xacts to match the current behavior.

Daniel Gustafsson sent in another revision of a patch to check the version of target cluster binaries in pg_upgrade.

Mark Rofail sent in another revision of a patch to implement foreign key arrays.

Matthias van de Meent sent in another revision of a patch to add progress-reported components for COPY progress reporting including a new view, pg_stat_progress_copy, add backlinks to progress reporting documentation, and add regression tests for same.

Dilip Kumar sent in three more revisions of a patch to provide a new interface to get the recovery pause status, pg_get_wal_replay_pause_state, that returns the actual status of the recovery pause: 'not paused' if pause is not requested, 'pause requested' if pause is requested but recovery is not yet paused, and 'paused' if recovery is actually paused.
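
With the patch applied, the proposed function would be queried like this (a sketch; the function name and return values are taken from the patch description):

```sql
-- On a standby, during recovery:
SELECT pg_get_wal_replay_pause_state();
-- one of: 'not paused', 'pause requested', 'paused'
```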

KaiGai Kohei sent in a patch to add binary input/output handlers to contrib/cube.

Georgios Kokolatos sent in another revision of a patch to make dbsize more consistent.

Mark Dilger sent in another revision of a patch to add pg_amcheck, a command line interface for running amcheck's verifications against tables and indexes.

John Naylor sent in two more revisions of a patch to make it possible to verify utf-8 using SIMD instructions.

Hayato Kuroda sent in three revisions of a patch to refactor ECPGconnect and allow IPv6 connections there.

Amit Langote, Greg Nancarrow, and Amit Kapila traded patches to make it possible to execute INSERT (INTO ... SELECT ...) with multiple workers.

Julien Rouhaud sent in another revision of a patch to add a new COLLATION option to REINDEX.

John Naylor sent in two revisions of a patch to allow inserting tuples into almost-empty pages.

Paul Martinez sent in two more revisions of a patch to document the effect of max_replication_slots on the subscriber side.

Ajin Cherian and Amit Kapila traded patches to avoid repeated decoding of prepared transactions after the restart, and add an option to enable two-phase commits in pg_create_logical_replication_slot.

Peter Eisentraut sent in another revision of a patch to fix use of cursor sensitivity terminology to match that in the SQL standard, removes the claim that sensitive cursors are supported, and adds a new option, ASENSITIVE, to cursors, that being the default behavior.

Benoit Lobréau sent in a patch to document in more detail how archive_command fails based on the signal it was sent, and whether it's reported in pg_stat_archiver.

Peter Eisentraut sent in another revision of a patch to set SNI for SSL connections from the client, which allows an SNI-aware proxy to route connections.

Peter Smith sent in three more revisions of a patch to implement logical decoding of two-phase transactions.

Amit Kapila sent in another revision of a patch to update documentation of logical replication to include the recently added logical replication configuration settings, and mention the fact that table synchronization workers are now using replication origins to track progress.

Thomas Munro sent in another revision of a patch to replace buffer I/O locks with condition variables.

Amit Langote sent in another revision of a patch to fix a misbehavior of partition row movement by ensuring that foreign key triggers are created on partitioned tables, and use same to enforce foreign keys correctly during cross-partition updates.

Thomas Munro sent in another revision of a patch to prevent latches from sending signals to processes that aren't currently sleeping, use SIGURG rather than SIGUSR1 for latches, use signalfd for epoll latches, which cuts down on system calls and other overheads by waiting on a signalfd instead of a signal handler and self-pipe, and use EVFILT_SIGNAL for kqueue latches.

Michaël Paquier sent in a patch to add a --tablespace option to reindexdb, matching the recently added capability for REINDEX.

Kota Miyake sent in a patch to fix pgbench's reporting of database name in errors when both PGUSER and PGPORT are set.

Amul Sul sent in another revision of a patch to implement wal prohibit state using a global barrier, error or Assert before START_CRIT_SECTION for WAL write, and document same.

Justin Pryzby sent in another revision of a patch to make it possible to use CREATE INDEX CONCURRENTLY on a partitioned table.

Jacob Champion sent in another revision of a patch to save the user's original authenticated identity for logging.

Daniel Gustafsson sent in another revision of a patch to disallow SSL compression by ignoring the option that would have turned it on. A later patch will remove the option entirely, now that it's deprecated.

Daniel Gustafsson sent in a patch to remove the defaults from libpq's authtype parameter, as it has been deprecated.

Álvaro Herrera sent in another revision of a patch to implement ALTER TABLE .. DETACH PARTITION CONCURRENTLY.

Dilip Kumar sent in two more revisions of a patch to make it possible to set the compression type for a table.

Euler Taveira de Oliveira sent in another revision of a patch to implement row filtering for logical replication using an optional WHERE clause in the DDL for PUBLICATIONs.

Thomas Munro sent in another revision of a patch to introduce symbolic names for FeBeWaitSet positions, and use FeBeWaitSet for walsender.c.

Thomas Munro sent in another revision of a patch to use condition variables for ProcSignalBarriers, allow condition variables to be used in interrupt code, and use a global barrier to fix DROP TABLESPACE on Windows by making it by force all backends to close all fds on that platform.

Andrey Borodin sent in a patch to use different compression methods for FPI.

Julien Rouhaud sent in a patch to change the explicit alignment use in pg_prewarm and pg_stat_statements to CACHELINEALIGN, and updates the alignment in hash_estimate_size() to an estimate of what ShmemInitHash will actually consume based on CACHELINEALIGN.

Thomas Munro sent in a patch to remove latch.c workaround for Linux < 2.6.27.

Peter Eisentraut sent in another revision of a patch to psql which makes it show all query results by default.

Jeff Janes sent in a patch to make SCRAM's behavior match MD5's by reporting in a DETAIL message when the password does not match for a user.

Joel Jacobson sent in a patch to implement a regexp_positions() function.

Paul Förster sent in a patch to mention database URIs in psql's --help output.

Justin Pryzby sent in a patch to refactor ATExec{En,Dis}ableRowSecurity in the style of ATExecForceNoForceRowSecurity, and do some further refactoring.

Justin Pryzby sent in a patch to implement ALTER TABLE SET TABLE ACCESS METHOD.

Categories: IT

Kali Linux 2021.1 - February 28, 2021 - 10:33

Who is ready for the first Kali release of 2021?

Kali Linux 2021.1 is ready for download with DE updates, tool updates, more partnerships with tool authors, support for VMs on Apple Silicon, NetHunter updates, and much more!

— Kali Linux (@kalilinux) February 24, 2021

The latest release of Kali Linux, the distribution specializing in penetration testing and ethical hacking tools, is now available.

Changes since the previous release:

Categories: IT

Zotero 5.0.96 - February 27, 2021 - 17:31

The latest version of the Zotero reference manager has been released:

Zotero is a free, open-source reference management application. It helps you organize and maintain your references and research notes (including storing PDF files). Its standout features include browser integration, online synchronization, in-text and footnote citations, and bibliography generation. Accordingly, it integrates with well-known word processors such as Microsoft Word, LibreOffice, OpenOffice.org Writer (today: Apache OpenOffice), and NeoOffice. The program was created by the Center for History and New Media at George Mason University (GMU).

More than 9,000 citation styles are available for Zotero.

Categories: IT

FreeBSD 13.0-BETA4 - February 27, 2021 - 16:43

#FreeBSD 13.0-BETA4 Now Available. Help test what will be 13.0-RELEASE:

— FreeBSD RE Team (@FreeBSD_RE) February 27, 2021

The fourth beta of FreeBSD 13.0 has been released and is available for testing.

Categories: IT

Mageia 8 has been released! - February 27, 2021 - 10:51

Mageia 8 released - #linux

— HUP (@huphu) February 27, 2021

Version 8 of the Mageia Linux distribution has been released. Main components:

Categories: IT

[KV] Which vaccine would you get? - February 26, 2021 - 12:18

Suppose the phone rings ...

Categories: IT

PHPerKaigi 2021 - February 26, 2021 - 09:00
Categories: IT

Introducing Linux Mint Devuan Edition - February 26, 2021 - 08:09

#Linux Mint Devuan Edition -

— HUP (@huphu) February 26, 2021

An unofficial and, for now, experimental Linux Mint variant built on the systemd-free Devuan distribution. Details here.

Categories: IT

Feature Freeze has arrived in the Hirsute Hippo release cycle - February 26, 2021 - 07:59

Hirsute Hippo (to be 21.04) Feature Freeze -

— HUP (@huphu) February 26, 2021

Nothing demonstrates better that the final release of Ubuntu 21.04 is near than the Feature Freeze arriving in the Hirsute Hippo release cycle. According to the roadmap, the final release is expected on April 22.

Categories: IT

GNOME 40 beta - February 25, 2021 - 18:42

GNOME 40 beta -

— HUP (@huphu) February 25, 2021

The GNOME 40 beta is available for testing. An ISO disk image is also available for testing. Details in the announcement.

Categories: IT

Red Hat has extended its no-cost RHEL offer to open source organizations - February 25, 2021 - 18:19

We're extending no-cost #RedHat Enterprise #Linux to #opensource organizations with the latest announcement of #RHEL for Open Source Infrastructure. Learn more about what is currently available:

— Red Hat, Inc. (@RedHat) February 25, 2021

Details in the announcement.

Categories: IT

Budapest Hackerspace - Pentesters and the world of the Challenge24 programming contest - February 25, 2021 - 18:05

Another meetup this Saturday from 18:30; this time we will try to give interested attendees a closer look at the world of pentesters and the Challenge24 programming contest:

— H.A.C.K. Budapest (@hackerspacebp) February 25, 2021


Categories: IT

For the first time in five years, Apple has moved ahead of Samsung again - February 25, 2021 - 08:58

It appears the iPhone 12 series has been a success: Apple is the market leader again. Huawei has fallen far behind, evidently due to the embargo.

It is worth keeping in mind, though, that this is only the 2020 Q4 statistic; naturally, whoever releases a phone in that quarter has an advantage, and Samsung did not launch a flagship device this time. Looking at the full year, Samsung is first, although the gap has narrowed.

Categories: IT

pgagroal 1.2.0 - February 25, 2021 - 01:00

The pgagroal community is happy to announce version 1.2.0.

New features

  • Allow users connecting to pgagroal to have different passwords than passwords used for the PostgreSQL connections

Various enhancements and bug fixes.


pgagroal is a high-performance protocol-native connection pool for PostgreSQL.


  • High performance
  • Connection pool
  • Limit connections for users and databases
  • Prefill support
  • Remove idle connections
  • Perform connection validation
  • Enable / disable database access
  • Graceful / fast shutdown
  • Prometheus support
  • Remote management
  • Authentication query support
  • Failover support
  • Transport Layer Security (TLS) v1.2+ support
  • Daemon mode
  • User vault

Learn more on our web site or GitHub. Follow on Twitter.

pgagroal is released under the 3-clause BSD license, and is sponsored by Red Hat.

Categories: IT

Database Lab Engine 2.2.0 and Joe Bot 0.9.0 - February 25, 2021 - 01:00
About Database Lab Engine

The Database Lab Engine (DLE) is an open-source experimentation platform for PostgreSQL databases. The DLE instantly creates full-size thin clones of your production database which you can use to:

  1. Test database migrations
  2. Optimize SQL queries
  3. Deploy full-size staging applications

The Database Lab Engine can generate thin clones for any size database, eliminating the hours (or days!) required to create “thick” database copies using conventional methods. Thin clones are independent, fully writable, and will behave identically to production: they will have the same data and will generate the same query plans.

Learn more about the Database Lab Engine and sign up for an account at

Database Lab Engine 2.2.0

Database Lab Engine (DLE) 2.2.0 further improves support for both types of PostgreSQL data directory initialization and synchronization: “physical” and “logical”. In particular, for the “logical” type (which is useful for managed cloud PostgreSQL such as Amazon RDS), it is now possible to set up multiple disks or disk arrays and automate data retrieval on a schedule. This gracefully cleans up the oldest versions of data, without downtime or interruptions in the lifecycle of clones.

Other improvements include:

  • Auto completion for the client CLI (“dblab”)
  • Clone container configuration — Docker parameters can now be defined in the DLE config (such as --shm-size, which is needed to avoid errors in newer versions of Postgres when parallel workers are used to process queries)
  • Allow requesting a clone with non-superuser access — This appears as a new option in the API and CLI called “restricted”

Database Lab Engine links:

Joe Bot 0.9.0 - A Virtual DBA for SQL Optimization

“Joe Bot”, a virtual DBA for SQL optimization, is a revolutionary new way to troubleshoot and optimize PostgreSQL query performance. Instead of running EXPLAIN or EXPLAIN (ANALYZE, BUFFERS) directly in production, users send queries for troubleshooting to Joe Bot. Joe Bot uses the Database Lab Engine (DLE) to:

  • Generate a fresh thin clone
  • Execute the query on the clone
  • Return the resulting execution plan to the user

The returned plan is identical to production in terms of structure and data volumes – this is achieved thanks to two factors:

  • thin clones have the same data and statistics as production (at a specified point in time), and
  • the PostgreSQL planner configuration on clones matches the production configuration.

Joe Bot users not only get reliable, risk-free information on how a query will be executed on production, but they can also easily apply any changes to their own thin clones and see how query behavior is affected. For example, it is possible to add a new index and see if it actually helps to speed up the query.

One key aspect of Joe Bot is that users do not see the data directly; they only work with metadata. Therefore, teams without access to production data can be granted permissions to use this tool [1].

The main change in Joe Bot 0.9.0 is improved security: in past versions, a DB superuser was used; now a non-superuser is used for all requests. This makes it impossible to use plpythonu, COPY TO PROGRAM, FDW, or dblink to perform a massive copy of data outside infrastructure that is not well protected by a strict firewall. All users are strongly recommended to upgrade as soon as possible.

Another major new feature is the production duration estimator, currently in an “experimental” state. This feature is intended to help users understand how long a specific operation - for example, an index creation operation - will actually take on the production database, which is likely to have a different physical infrastructure (for example a different filesystem, more RAM, and/or more CPU cores) than the thin clone running on the DLE. Read more: “Query duration difference between Database Lab and production environments”.

SQL Optimization Chatbot “Joe Bot” links:

[1] Although only metadata is returned from Joe Bot, it is possible to probe data for specific values using EXPLAIN ANALYZE. Please consult security experts in your organization before providing Joe Bot to people without production-level access.
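To make the footnote concrete: the actual row counts that EXPLAIN ANALYZE embeds in a plan can confirm whether a specific value exists in a table, even though no row data is returned. A minimal sketch, assuming a hypothetical plan line:

```python
import re

# A plan from EXPLAIN ANALYZE includes actual row counts. Even without
# seeing any data, "rows=1" on an equality filter tells the user that
# the probed value exists in the table. The plan line is hypothetical.
plan_line = ("Index Scan using users_email_idx on users  "
             "(actual time=0.03..0.04 rows=1 loops=1)")

match = re.search(r"actual [^)]*rows=(\d+)", plan_line)
value_exists = match is not None and int(match.group(1)) > 0
print(value_exists)  # True: the probed value was found
```

This is why the announcement recommends consulting security experts before granting Joe Bot access to people without production-level access.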

Both Joe Bot and Database Lab Engine are distributed under an OSI-approved license (AGPLv3).

Your feedback is highly appreciated:

Categories: Informatics

pgAdmin 4 v5.0 Released - February 25, 2021 - 01:00

The pgAdmin Development Team are pleased to announce pgAdmin 4 version 5.0. This release of pgAdmin 4 includes 31 bug fixes and new features. For more details please see the release notes.

pgAdmin is the leading Open Source graphical management tool for PostgreSQL. For more information, please see the website.

Notable changes in this release include:

  • New Desktop Runtime (Using NWjs):

The Desktop Runtime is now based on NWjs, which integrates a browser and the Python server into a standalone application. By implementing it with NWjs we eliminate the separate server application and independent browser, as well as the Qt and C++ runtime logic.

    There are two minor known issues with this feature (6255 and 6258), both of which are due to bugs in NWjs itself. To avoid the first issue, users on macOS should use the application menu to exit pgAdmin rather than quitting from the Dock icon. The second issue may cause Windows users to see a red square instead of the normal application icon in some circumstances.

  • Logical Replication support:

    Logical replication uses a publish and subscribe model with one or more subscribers subscribing to one or more publications on a publisher node. We have added support for logical replication by introducing new treeview nodes and dialogues with which users can easily create/alter/delete publications and subscriptions. Support is also included in the Schema Diff tool.

  • Quick Search functionality:

    Added a quick search option in the Help menu to search menu items and help articles. Type at least three characters to display all the matching possibilities under Menu items and the relevant documents under Help articles.

  • Make Statistics, Dependencies, Dependants tabs closable. Users can add them back using the 'Add panel' option on the context menu for the tab strip.

  • When running in Docker/Kubernetes, ensure logs are not stored in the container, and only sent to the console.
  • Use cheroot as the default production server for pgAdmin4.
  • Updated JavaScript dependencies to the latest versions.
  • Fixed an issue where the focus is not properly set on the filter text editor after closing the error dialog.
  • Fixed an issue where the dependencies tab shows multiple owners for the objects having shared dependencies.
  • Fixed an issue where the Zoom to fit button in the ERD Tool only works if the diagram is larger than the canvas.
  • Fixed an issue where the user was unable to change the background color for a server.
  • Fixed an issue where external utility jobs (backup, maintenance etc.) are failing when the log level is set to DEBUG.
  • Ensure DEB/RPM packages depend on the same version of each other.
  • Fixed an autocomplete issue where it is not showing any suggestions if the schema name contains escape characters.

Builds for Windows and macOS are available now, along with a Python Wheel, Docker Container, RPM, DEB Package, and source code tarball from the tarball area.

Categories: Informatics

[Video] DevSecOps in practice: scaling security processes and their challenges - February 24, 2021 - 21:30

Ottucsák József (TrueMotion)

DevSecOps is security's answer to the accelerated pace dictated by modern development technologies and methodologies. Security teams must adapt to new demands, rethinking existing manual workflows with automation and scalability in mind. In this talk we share the experience we have gathered over the past year: how we built our security program, what challenges we faced and what lies ahead, what changes this new trend has brought, and what tricks make security scalable both inside and outside the CI/CD pipeline.

Categories: Informatics
