Production ASIC technology nodes range from 16 nm up to 600 nm, with the 10 nm and 7 nm nodes nearing production status. As a rough rule of thumb, with each step down in technology node the NRE doubles, the logic density doubles, and the wafer cost increases by ~25%. Moving to a more advanced node can therefore yield a cost savings, as long as the production volume compensates for the increased NRE.
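The rule of thumb above can be turned into a quick per-die cost estimate. The sketch below applies one node shrink using only the scaling factors stated in the text; the starting NRE, wafer cost, and density values are hypothetical placeholders, not real foundry prices.

```python
# Rough per-die cost scaling for one node shrink, using the text's
# rule of thumb: NRE doubles, logic density doubles, wafer cost +25%.

def next_node(nre, wafer_cost, gates_per_mm2):
    """Apply the rough scaling rules for one step to a smaller node."""
    return 2 * nre, 1.25 * wafer_cost, 2 * gates_per_mm2

# Hypothetical starting point (placeholder values)
nre, wafer_cost, density = 250_000.0, 1_000.0, 100_000.0

nre2, wafer2, density2 = next_node(nre, wafer_cost, density)

# For a core-limited die, doubling density halves the die area, so
# dies per wafer doubles while the wafer cost grows only 25%:
die_cost_ratio = (wafer2 / wafer_cost) / (density2 / density)
print(f"per-die cost ratio after one shrink: {die_cost_ratio:.3f}")  # 0.625
```

So a core-limited design sees roughly a 37.5% drop in silicon cost per shrink, which is the savings that must pay back the doubled NRE.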
There are two items that work against migrating to a more advanced node. The first is the anticipated production volume. Economically speaking, the question is: how many production parts must ship before the unit cost savings equals the increase in NRE, and how long will it take to reach that breakeven point?
How long it takes to reach that breakeven point is important. A node selection that breaks even in 5 years is not economical; a project that breaks even in a few months is a no-brainer. Typically, an ASIC node selection needs to break even in under a year, with a 6 to 9 month period being ideal.
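The breakeven arithmetic above is simple enough to sketch directly. In the example below, the extra NRE, per-part savings, and monthly volume are all hypothetical numbers chosen for illustration.

```python
# Breakeven volume and time for a node migration.
# delta_nre: extra NRE cost of the smaller node.
# unit_saving: per-part cost reduction the smaller node buys.
# monthly_volume: anticipated production rate.

def breakeven(delta_nre, unit_saving, monthly_volume):
    parts = delta_nre / unit_saving      # parts needed to recover the extra NRE
    months = parts / monthly_volume      # time to ship that many parts
    return parts, months

# Hypothetical example values
parts, months = breakeven(delta_nre=250_000,
                          unit_saving=0.50,
                          monthly_volume=60_000)
print(f"breakeven after {parts:,.0f} parts (~{months:.1f} months)")
# 500,000 parts, ~8.3 months -- inside the 6-9 month ideal window
```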
A second point to consider is that an increase in logic density at a given node does not always result in a lower-cost die. A die consists of core logic surrounded by a pad ring, which comprises the input/output buffers, the power bussing, and the scribe line (the space required to allow the die to be cut from the wafer). The I/O buffers have a minimum size needed to withstand ESD damage, and the pads have a minimum size set by assembly constraints. Together, this produces a pad ring that does not change size across technology nodes.
Consider the case of a 256-pin circuit with a 50 µm pad pitch. This die will be a minimum of 3.5 mm per side, with a core area of 6.25 mm². The table shows how many gates can be put into that 6.25 mm² space. So if a design has fewer than 400,000 gates, and if the 180 nm node will support the speed requirements, then there is no reason to use a smaller technology node.
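The geometry of that pad-limited example can be reproduced with a short calculation. The 300 µm corner allowance and 500 µm pad-ring depth below are assumed values, chosen so the result matches the 3.5 mm die and 6.25 mm² core quoted above; real values depend on the I/O library and assembly rules.

```python
# Pad-limited die geometry for a square die with pads on all 4 sides.
# Dimensions are kept in integer micrometres to avoid float rounding.

def pad_limited_die(n_pads, pad_pitch_um, corner_um, ring_depth_um):
    pads_per_side = n_pads // 4                          # square die, 4 sides
    die_side_um = pads_per_side * pad_pitch_um + corner_um
    core_side_um = die_side_um - 2 * ring_depth_um       # ring on both edges
    return die_side_um / 1000, (core_side_um / 1000) ** 2  # mm, mm^2

# 256 pads, 50 um pitch; assumed 300 um corner allowance and
# 500 um pad-ring depth (hypothetical, to match the text's numbers)
die_side, core_area = pad_limited_die(256, 50, 300, 500)
print(f"die side: {die_side} mm, core area: {core_area} mm^2")
# die side: 3.5 mm, core area: 6.25 mm^2
```

Because the pad ring is fixed by pin count and pitch, shrinking the core logic below this pad-limited size buys nothing: the die cannot get any smaller.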