The logic to free the buffer after the gss_init_sec_context() call was
always a bit wonky. Because gss_init_sec_context() sets the GSS context
variable, conn->gctx, we would in fact always attempt to free the buffer.
That only works because previously conn->ginbuf.value was initialized to
NULL, and free(NULL) is a no-op. Commit
61bf96cab0 refactored things so
that the GSS input token buffer is allocated locally in pg_GSS_continue,
and not held in the PGconn object. After that, the now-local ginbuf.value
variable isn't initialized when it's not used, so we pass a bogus pointer
to free().
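
As a stand-alone illustration (hypothetical code, not the actual libpq
source) of why the old coding was only accidentally safe:

#include <stdlib.h>

int
main(void)
{
    /* Like the old conn->ginbuf.value: zeroed when the PGconn was allocated. */
    void *value = NULL;

    /* free(NULL) is defined to be a no-op, so the unconditional free was safe. */
    free(value);

    /*
     * With ginbuf as an uninitialized local, ginbuf.value holds an
     * indeterminate value on the first iteration, and passing it to free()
     * is undefined behavior.  That case is deliberately not executed here.
     */
    return 0;
}
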
To fix, only try to free the input buffer if we allocated it. That was the
intention, certainly after the refactoring, and probably even before that.
But because there's no live bug before the refactoring, I refrained from
backpatching this.
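
The same pattern as a stand-alone sketch (hypothetical names and a plain
byte buffer instead of the real GSS types): set the buffer's value to NULL
explicitly when nothing is passed in, and free it only when it was actually
allocated.

#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for gss_buffer_desc: just a length and a pointer. */
struct token_buf
{
    size_t  length;
    void   *value;
};

/* One iteration of the exchange; intoken is NULL on the first call. */
static int
one_iteration(const char *intoken, size_t inlen)
{
    struct token_buf ginbuf;

    if (intoken != NULL)
    {
        /* Continuation: copy the input token into a freshly allocated buffer. */
        ginbuf.length = inlen;
        ginbuf.value = malloc(inlen);
        if (ginbuf.value == NULL)
            return -1;
        memcpy(ginbuf.value, intoken, inlen);
    }
    else
    {
        /* First call: make it explicit that nothing was allocated. */
        ginbuf.length = 0;
        ginbuf.value = NULL;
    }

    /* ... the call that consumes ginbuf would go here ... */

    /* Free the input buffer only if we allocated it above. */
    if (ginbuf.value)
        free(ginbuf.value);

    return 0;
}

int
main(void)
{
    return one_iteration(NULL, 0) || one_iteration("token", 5);
}
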
The bug was also independently reported by Graham Dutton, as bug #14690.
Patch reviewed by Michael Paquier.
Discussion: https://www.postgresql.org/message-id/6288d80e-a0bf-d4d3-4e12-7b79c77f1771%40iki.fi
Discussion: https://www.postgresql.org/message-id/20170605130954.1438.90535%40wrigleys.postgresql.org
 			return STATUS_ERROR;
 		}
 	}
+	else
+	{
+		ginbuf.length = 0;
+		ginbuf.value = NULL;
+	}
 
 	maj_stat = gss_init_sec_context(&min_stat,
 									GSS_C_NO_CREDENTIAL,
 									&conn->gctx,
 									conn->gtarg_nam,
 									GSS_C_NO_OID,
 									GSS_C_MUTUAL_FLAG,
 									0,
 									GSS_C_NO_CHANNEL_BINDINGS,
-									(conn->gctx == GSS_C_NO_CONTEXT) ? GSS_C_NO_BUFFER : &ginbuf,
+									(ginbuf.value == NULL) ? GSS_C_NO_BUFFER : &ginbuf,
 									NULL,
 									&goutbuf,
 									NULL,
 									NULL);
 
-	if (conn->gctx != GSS_C_NO_CONTEXT)
+	if (ginbuf.value)
 		free(ginbuf.value);
 
 	if (goutbuf.length != 0)