One optimization you can make to a SQL table that is overly large is to change from nvarchar (or nchar) to varchar (or char). Doing so will roughly cut the size of the character data in half, from 2 bytes per character with nvarchar to 1 byte per character with varchar (variable-length columns of either type also carry 2 bytes of per-value overhead). However, you will lose the ability to store Unicode characters, such as those used by many non-English alphabets. If the tables store user input, and your application is or might one day be used internationally, it's likely that using Unicode for your characters is worth the space. However, if the data is instead generated by your application itself or your development team (such as lookup data), and you can be certain that Unicode character sets are not required, then switching such columns to varchar/char can be an easy improvement to make (a conversion sketch appears at the end of this post).

Avoid Premature Optimization

If you are working with a lookup table that has a small number of rows, and is only ever referenced in the application by its numeric ID column, then you won't see any benefit from using varchar over nvarchar. More generally, for small tables, you won't see any significant benefit. Thus, if you have a general policy in place to use nvarchar/nchar because it offers more flexibility, do not take this post as a recommendation to go against that policy everywhere you can. You really only want to act on measurable evidence that using Unicode is causing a problem, and that you won't lose anything by switching to varchar/char.
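To gather that evidence, measure first. A minimal sketch follows, assuming a hypothetical dbo.OrderStatus lookup table with an nvarchar(50) StatusName column (the names are illustrative, not from this post); sp_spaceused and DATALENGTH are standard SQL Server tools.

```sql
-- Hypothetical table/column names; substitute your own.
-- How much space does the table use overall?
EXEC sp_spaceused N'dbo.OrderStatus';

-- How many bytes does the candidate column hold?
-- DATALENGTH returns bytes, so an nvarchar column reports roughly
-- double what the same text would occupy as varchar.
SELECT SUM(DATALENGTH(StatusName)) AS TotalBytes
FROM dbo.OrderStatus;
```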
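If the numbers justify the change, a conversion might look like the sketch below, again against the hypothetical dbo.OrderStatus table. The round-trip CAST pre-check flags values that the narrower type cannot represent (they come back as '?'), so a non-empty result means the column is not safe to narrow. Note that indexes or constraints referencing the column may need to be dropped and recreated around the ALTER, and the NOT NULL below assumes the column was already declared that way.

```sql
-- Pre-check: find values that would lose characters in the conversion.
-- Characters outside the target code page round-trip as '?', so any
-- mismatch here means real data would be mangled.
SELECT StatusName
FROM dbo.OrderStatus
WHERE StatusName <> CAST(CAST(StatusName AS varchar(50)) AS nvarchar(50));

-- If the pre-check returns no rows, narrow the column in place.
ALTER TABLE dbo.OrderStatus
    ALTER COLUMN StatusName varchar(50) NOT NULL;

-- Rebuild to reclaim the freed space, then re-measure.
ALTER INDEX ALL ON dbo.OrderStatus REBUILD;
EXEC sp_spaceused N'dbo.OrderStatus';
```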