Have locally rebuilt and installed a 16-CURRENT kernel and world, as well as the git and emacs-nox ports, which pull in perl and gnulib (part of m4) as dependencies. No issues noted; in particular, the perl issues reported in bug #203900 ten years ago were not observed (perl now uses fdclose() instead of overwriting _file). Both gnulib and /bin/cat in the base system access internal FILE members other than _file directly, but both appeared to work correctly.
The simple test program below also behaved as expected: it failed on the current stdio and succeeded on the patched stdio. A binary compiled against the current stdio but run on the patched stdio also succeeded, with fileno() reporting the actual file descriptor but fileno_unlocked() (which is implemented as a macro) reporting -1.
#include <sys/socket.h>
#include <netinet/in.h>
#include <err.h>
#include <stdio.h>
#include <sysexits.h>
#include <unistd.h>

int
main(void)
{
	int f;

	/* Open 0x8000 sockets so the next free descriptor exceeds SHRT_MAX. */
	for (int i = 0; i < 0x8000; i++)
		if ((f = socket(PF_INET, SOCK_STREAM, 0)) < 0)
			err(EX_OSFILE, "socket");
	/* Release the last (highest) descriptor so fopen() reuses it. */
	if (close(f) < 0)
		err(EX_OSFILE, "close");
	FILE *const fp = fopen("/dev/null", "r");
	if (fp == NULL)
		err(EX_OSFILE, "fopen");
	printf("fileno         : %d\n", fileno(fp));
	printf("fileno_unlocked: %d\n", fileno_unlocked(fp));
	if (fclose(fp) == EOF)
		err(EX_OSFILE, "fclose");
	return (0);
}
Question: should /usr/src/lib/libc/tests/stdio/fopen_test.c and friends gain a test case for file descriptors above 32,767 (e.g. >32,767 concurrently open sockets)? Note that such a test could fail for unrelated reasons: a low resource limit on the build machine, a sysctl set too low, etc.
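One portable shape such a test could take (a sketch only, not using the ATF harness the libc test suite is built on; HIGH_FD and the skip-on-low-limit behavior are my own choices) is to obtain a single high-numbered descriptor with dup2() rather than opening tens of thousands of sockets, which sidesteps most of the resource-limit concerns above:

/*
 * Sketch of a high-fd stdio check.  Instead of ~32k sockets, raise
 * RLIMIT_NOFILE and dup2() onto an fd above SHRT_MAX, then verify
 * stdio can wrap it.  Skips (exit 0 with a message) if the limit
 * cannot be raised, per the resource-limit caveat above.
 */
#include <sys/resource.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

#define HIGH_FD 40000	/* arbitrary descriptor above SHRT_MAX (32767) */

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_NOFILE, &rl) != 0)
		return (1);
	if (rl.rlim_cur < HIGH_FD + 1) {
		rl.rlim_cur = HIGH_FD + 1;
		if (rl.rlim_cur > rl.rlim_max ||
		    setrlimit(RLIMIT_NOFILE, &rl) != 0) {
			printf("skipped: NOFILE limit too low\n");
			return (0);
		}
	}
	int fd = open("/dev/null", O_RDONLY);
	if (fd < 0 || dup2(fd, HIGH_FD) != HIGH_FD) {
		printf("skipped: cannot create high fd\n");
		return (0);
	}
	close(fd);
	FILE *fp = fdopen(HIGH_FD, "r");
	if (fp == NULL) {
		printf("FAIL: fdopen on fd %d\n", HIGH_FD);
		return (1);
	}
	/* On patched stdio this should print the real descriptor number. */
	printf("fileno: %d\n", fileno(fp));
	fclose(fp);
	return (0);
}

An ATF version would replace the printf/skip logic with atf_tc_skip() and ATF_REQUIRE_EQ() calls.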
All ports should be built against this change to see which other ports might break.