May 24, 2021
For a dedicated SQL pool, the documented maximum is 1024 columns per table.
I don’t know the context behind having 27,000 columns in a dataset, whether the data is sparse, or how many rows you have.
You can try one of the following options:
- Unpivot your data from a wide format to a long format, if that’s feasible.
- Store your data in a columnar format like ORC/Parquet and use a Synapse Spark pool to process/query it. There is little documentation on the maximum number of columns in ORC/Parquet, so you will need to give it a try. There could be other limits imposed by Java or Spark itself, but there is no harm in trying such an extreme case.
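To illustrate the first option, here is a minimal sketch of the unpivot idea using pandas (`melt`); on a Synapse Spark pool you would do the equivalent with PySpark, but the shape transformation is the same. The column names and data here are made up for illustration:

```python
import pandas as pd

# Hypothetical wide table: one row per id plus thousands of measurement
# columns (only two shown here). Dedicated SQL pool caps this at 1024 columns.
wide = pd.DataFrame({
    "id": [1, 2],
    "col_0001": [10, 20],
    "col_0002": [30, 40],
})

# Unpivot: each measurement column becomes a (column_name, value) row,
# so the table needs only three columns no matter how wide the source was.
long = wide.melt(id_vars=["id"], var_name="column_name", value_name="value")

print(long.sort_values(["id", "column_name"]).to_string(index=False))
```

A long table like this fits comfortably within the 1024-column limit, and you can always pivot subsets of columns back out at query time.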