Why does Oracle require TO_NCHAR when binding SQL_C_WCHAR text via ODBC
I use the following statement prepared and bound in ODBC:
SELECT (CASE profile WHEN ? THEN 1 ELSE 2 END) AS profile_order FROM engine_properties;
Executed over an ODBC 3.0 connection to an Oracle 10g database using the AL32UTF8 character set, and even with the parameter bound to a wchar_t string via SQLBindParameter(SQL_C_WCHAR), it still fails with the error ORA-12704: character set mismatch.
Why? I'm binding as wchar. Shouldn't a wchar be considered an NCHAR?
If I wrap the parameter in TO_NCHAR(), the query runs without error. However, since these queries are used against multiple database backends, I don't want to add TO_NCHAR to text bindings just for Oracle. Is there something I'm missing? Is there another way to solve this without the TO_NCHAR hammer?
I haven't been able to find anything relevant via searches or in the manuals.
SELECT (CASE profile WHEN '_default' THEN 1 ELSE 2 END) AS profile_order FROM engine_properties;
SELECT (CASE profile WHEN TO_NCHAR('_default') THEN 1 ELSE 2 END) AS profile_order FROM engine_properties;
SQL> describe engine_properties;
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 EID                                       NOT NULL NVARCHAR2(22)
 LID                                       NOT NULL NUMBER(11)
 PROFILE                                   NOT NULL NVARCHAR2(32)
 PKEY                                      NOT NULL NVARCHAR2(50)
 VALUE                                     NOT NULL NVARCHAR2(64)
 READONLY                                  NOT NULL NUMBER(5)
The first version (without TO_NCHAR) works fine in SQL Server and PostgreSQL (via ODBC) and in SQLite (direct). In Oracle, however, it returns "ORA-12704: character set mismatch".
SQLPrepare(SELECT (CASE profile WHEN ? THEN 1 ELSE 2 END) AS profile_order FROM engine_properties;) = SQL_SUCCESS
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_VARCHAR, 32, 0, "_default", 18, 16) = SQL_SUCCESS
SQLExecute() = SQL_ERROR
SQLGetDiagRec(1) = SQL_SUCCESS
  [SQLSTATE: HY000, NATIVE: 12704, MESSAGE: [Oracle][ODBC][Ora]ORA-12704: character set mismatch]
SQLGetDiagRec(2) = SQL_NO_DATA
If I do use TO_NCHAR, it works (but then the query breaks on SQL Server, PostgreSQL, SQLite, etc.).
SQLPrepare(SELECT (CASE profile WHEN TO_NCHAR(?) THEN 1 ELSE 2 END) AS profile_order FROM engine_properties;) = SQL_SUCCESS
SQLBindParameter(hstmt, 1, SQL_PARAM_INPUT, SQL_C_WCHAR, SQL_VARCHAR, 32, 0, "_default", 18, 16) = SQL_SUCCESS
SQLExecute() = SQL_SUCCESS
SQLNumResultCols() = SQL_SUCCESS (count = 1)
SQLFetch() = SQL_SUCCESS
If the Oracle database character set is AL32UTF8, why are the columns defined as NVARCHAR2? That means that you want those columns encoded using the national character set (normally AL16UTF16, but that may be different on your database). Unless you are primarily storing Asian language data (or other data that requires 3 bytes of storage in AL32UTF8), it is relatively uncommon to create NVARCHAR2 columns in an Oracle database when the database character set supports Unicode.
In general, you are far better served sticking with the database character set (CHAR and VARCHAR2 columns) than trying to work with the national character set (NCHAR and NVARCHAR2 columns), because there are far fewer hoops to jump through on the development/configuration side of things. Since you aren't increasing the set of characters you can encode by choosing NVARCHAR2 data types, I'll wager that you'd be happier with VARCHAR2 data types.
I still can't say that I understand exactly how to choose between VARCHAR2 and NVARCHAR2. I had tried using VARCHAR2 for my data (which does include a lot of different languages, both European and Asian) and it didn't work then.
I've done some more experimenting, though, and found that Justin's suggestion works with this combination:
- AL32UTF8 database charset
- VARCHAR2 column types
- set NLS_LANG=.UTF8 before starting sqlplus.exe
- data files using UTF-8 (i.e. the files with all the INSERT statements)
- inserting and extracting strings from the database using SQL_C_WCHAR
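The NLS_LANG step above, as a Unix shell sketch (the original used the Windows `set` syntax; the sqlplus invocation and data.sql filename here are hypothetical):

```shell
# Client-side character set: tell Oracle's client libraries that this
# process sends/receives UTF-8. Must be set before launching sqlplus.
export NLS_LANG=.UTF8

# Hypothetical: load the UTF-8 data files full of INSERT statements.
# sqlplus user/password@orcl @data.sql

echo "NLS_LANG is $NLS_LANG"
```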
I still don't find Oracle as fun to play with as (for instance) PostgreSQL though... :-)