
MySQLnd: Support cursors in store/get result #6518


Closed
wants to merge 3 commits

Conversation

nikic (Member) commented Dec 16, 2020

I was initially pursuing a different solution here, but then realized that for store_result and get_result, we can simply issue a COM_FETCH with the maximum number of rows (-1) upfront, and then store the result set as usual.

This approach is very simple (we can probably even apply it to 7.4) and is closer to what we actually want for store/get result, which is to ignore the cursor completely.

use_result still performs a row-by-row fetch when a cursor is involved. This is really inefficient, but that's a separate issue.
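
To make the behavior concrete, here is a minimal PHP sketch of the case this targets (the connection details and the users table are made up for illustration): a read-only cursor consumed through get_result(), which with this change buffers the whole result set upfront.

```php
<?php
// Hypothetical connection and table, for illustration only.
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$stmt = $mysqli->prepare('SELECT id, name FROM users');
// Request a server-side cursor for the prepared statement.
$stmt->attr_set(MYSQLI_STMT_ATTR_CURSOR_TYPE, MYSQLI_CURSOR_TYPE_READ_ONLY);
$stmt->execute();

// With this change, get_result()/store_result() fetch all rows at once
// (COM_FETCH with the maximum row count) instead of failing on the cursor.
$result = $stmt->get_result();
while ($row = $result->fetch_assoc()) {
    var_dump($row);
}
```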

nikic (Member, Author) commented Dec 16, 2020

cc @kamil-tekiela It looks like supporting get_result + PS cursors is much simpler than expected. What do you think about this?

nikic (Member, Author) commented Dec 17, 2020

Okay, there are two parts to this now:

  1. Performing a COM_FETCH with max_rows for store_result (and thus get_result). Apparently this is also what libmysqlclient does: https://github.com/mysql/mysql-server/blob/ee4455a33b10f1b1886044322e4893f587b319ed/libmysql/libmysql.cc#L4253
  2. Calling an SP that uses a cursor without requesting a cursor from the client (sketched below). It seems like just ignoring the SERVER_STATUS_CURSOR_EXISTS flag when CURSOR_TYPE_READ_ONLY was not set works. Looking around a bit more, https://jira.mariadb.org/browse/CONC-424 indicates that this is a server bug present in both MySQL and MariaDB, and that MariaDB addressed it in the connector (mariadb-corporation/mariadb-connector-c@d09ac51) in a similar way to what I'm doing here. However, I didn't find similar logic in the MySQL connector. There is https://github.com/mysql/mysql-server/blob/ee4455a33b10f1b1886044322e4893f587b319ed/libmysql/libmysql.cc#L1994-L1996, but that only applies in CLIENT_DEPRECATE_EOF mode...
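
A minimal sketch of that second scenario (the procedure name, connection details, and query are made up): the procedure uses a cursor internally, the client never sets a cursor attribute, and the SERVER_STATUS_CURSOR_EXISTS flag the server reports is simply ignored.

```php
<?php
// Hypothetical procedure that uses a cursor internally; the client side
// never requests a cursor via MYSQLI_STMT_ATTR_CURSOR_TYPE.
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$mysqli->query('DROP PROCEDURE IF EXISTS p_uses_cursor');
$mysqli->query('CREATE PROCEDURE p_uses_cursor()
BEGIN
    DECLARE v INT;
    DECLARE cur CURSOR FOR SELECT 1;
    OPEN cur;
    FETCH cur INTO v;
    CLOSE cur;
    SELECT v AS result;
END');

$stmt = $mysqli->prepare('CALL p_uses_cursor()');
$stmt->execute();                  // no client-side cursor requested
$result = $stmt->get_result();     // server still flags SERVER_STATUS_CURSOR_EXISTS
var_dump($result->fetch_assoc());  // expected: ['result' => 1]
```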

kamil-tekiela (Member) commented Dec 17, 2020

Ok, I'm sold. It seems to work for me, but I would also add bind_result/fetch to the test case to cover the two duplicate bugs people raised about "packets out of order" errors.

It really seems like an unfinished feature or a bug on the MySQL side. I can't find any real use for cursors used from an SP in PHP. It just makes no sense to me.

As for cursors created via MYSQLI_STMT_ATTR_CURSOR_TYPE, I feel this should be better documented. If it's possible to use one with get_result(), then the documentation should clearly mention that doing so is pointless.
I still do not know the purpose of MYSQLI_STMT_ATTR_PREFETCH_ROWS. Is this related to cursors? Does it work, or is it just a dummy option like MYSQLI_CURSOR_TYPE_SCROLLABLE/MYSQLI_CURSOR_TYPE_FOR_UPDATE? The test case says it is currently not implemented...

I have also tested it with an SP that uses cursors and returns multiple result sets. That seems to work, too.
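
A rough sketch of the bind_result/fetch coverage suggested above (again with a made-up table and connection): the cursor-backed statement is buffered with store_result() and then consumed via bound variables, the path behind the reported "packets out of order" errors.

```php
<?php
// Hypothetical connection and table, for illustration only.
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$stmt = $mysqli->prepare('SELECT id FROM users');
$stmt->attr_set(MYSQLI_STMT_ATTR_CURSOR_TYPE, MYSQLI_CURSOR_TYPE_READ_ONLY);
$stmt->execute();

$stmt->store_result();        // buffers all rows even though a cursor exists
$stmt->bind_result($id);
while ($stmt->fetch()) {
    var_dump($id);
}
$stmt->free_result();
```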

nikic (Member, Author) commented Dec 17, 2020

> I still do not know the purpose of MYSQLI_STMT_ATTR_PREFETCH_ROWS. Is this related to cursors? Does it work, or is it just a dummy option like MYSQLI_CURSOR_TYPE_SCROLLABLE/MYSQLI_CURSOR_TYPE_FOR_UPDATE? The test case says it is currently not implemented...

I can answer this one at least: it is the number of rows fetched from the cursor at a time. However, mysqlnd ignores this option and always fetches one row at a time (which is very inefficient...)
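
For context, a tiny sketch of where the attribute would plug in (query and connection are made up): the prefetch size is set before execute(), but mysqlnd currently ignores it and keeps fetching one row per round trip.

```php
<?php
// Hypothetical connection and query, for illustration only.
$mysqli = new mysqli('localhost', 'user', 'pass', 'test');

$stmt = $mysqli->prepare('SELECT id FROM users');
$stmt->attr_set(MYSQLI_STMT_ATTR_CURSOR_TYPE, MYSQLI_CURSOR_TYPE_READ_ONLY);
// Intended meaning: fetch 100 rows per COM_FETCH; mysqlnd currently ignores
// this and still pulls one row at a time from the cursor.
$stmt->attr_set(MYSQLI_STMT_ATTR_PREFETCH_ROWS, 100);
$stmt->execute();
```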

php-pulls closed this in bc16684 on Dec 18, 2020