Switch internals to use faster connection #2210


Merged: 3 commits, May 13, 2020
Conversation

@brianc (Owner) commented May 12, 2020

This switches the internals to use faster protocol parsing & serializing. This results in a significant (30% - 50%) speed up in some common query patterns. There is quite a bit more performance work I need to do, but this takes care of some initial stuff & removes a big fork in the code which is making it harder to juggle other changes. The changes here are backwards compatible. The only test I modified is one very old unit test for the parsing internals, which I have already ported to the new pg-protocol package and extended over there. So... should be good to go!

Anecdotally...I'm working on a feature at work where we are streaming tens of thousands of pretty large rows. I was running PG_FAST_CONNECTION=true & the query went from 12.5 seconds on average to 7 seconds on average.

@brianc brianc requested a review from charmander May 12, 2020 16:03
@brianc (Owner, Author) commented May 12, 2020

Something weird is going on... the cursor tests in Travis are really slow. The whole pg-cursor test suite finishes locally for me in 524 milliseconds, while in Travis each individual test is taking 100-200 milliseconds... hmm...


@abenhamdine (Contributor) commented

> This switches the internals to use faster protocol parsing & serializing. This results in a significant (30% - 50%) speed up in some common query patterns. [...]

Just to say that I've had PG_FAST_CONNECTION=true enabled in production for 3 weeks now and... so far so good: no bugs, and users noticed the application is more responsive. So a big thx! 🎉


@brianc force-pushed the bmc/switch-to-fast-connection branch from 447924a to 08afb12 on May 12, 2020 21:32
@brianc (Owner, Author) commented May 12, 2020

Ah yes, gotta call `.setNoDelay(true)`. Now Travis is not timing out. 💭

@brianc (Owner, Author) commented May 12, 2020

Hey @sehrope, do you think you could check what's up w/ the SCRAM test failure? I'm still trying to get it to work locally... but something's up, because it's failing in Travis.

edit: got it figured out. super glad you just added those tests & I caught this! 👯

@brianc (Owner, Author) commented May 12, 2020

I think I actually figured it out... I'm concerned the tests never caught this issue (until now), but it looks like the SCRAM parsing was off by 1 byte and usually "just worked" due to the way buffers were coming in.

https://github.com/brianc/node-postgres/pull/2210/files#diff-1449b1bce007eeb37fb3f3077d2dd1cbL466

That should be `length - 8`. I think it worked in the past because the parser was slicing buffers based on the length of their packets... but anyway, I'm going to write some unit tests to cover this from the parser side as well.

@sehrope (Contributor) commented May 12, 2020

Wowza. So was this a bug in something that changed, or was the existing code impacted as well? Curious how the existing SCRAM tests were passing. Or is it a kind of double bug, where the parser was broken but the rest of the SCRAM code compensated by slicing off the extra piece?

@sehrope (Contributor) commented May 12, 2020

> edit: got it figured out. super glad you just added those tests & I caught this!

🚀

@brianc (Owner, Author) commented May 12, 2020

> Wowza. So was this a bug in something that changed or was the existing one impacted as well?

From what I can tell it's always been there. Reading `length - 4` is never correct: `length` is the length of the packet including the length field itself, and is always 8 more than the length of the string, since the packet is Int32, Int32, string. As to why it showed up now... that's a bit of a mystery. I would have imagined your PR that added the integration tests for this would have caught it then. It does have to do w/ how many bytes come in on a single packet from the network, so it's possibly a timing-related issue why it didn't show up before. Well, now we have the integration tests you added, which shook out the timing issue, and I added some unit tests that will always fail on the old code, so we're covered against regressions. I also have a hunch not a ton of people are using the SCRAM authentication method yet, since, again, it being timing-related I would have expected it to show up at some point. 🤷

@brianc brianc merged commit 5930e4f into master May 13, 2020
@brianc (Owner, Author) commented May 13, 2020

thanks for the 👀 @charmander ❤️
