While it's true that the x nm nomenclature no longer matches physical feature size, it's definitely not just marketing BS. Process node transitions are significant and very (very) challenging technological steps, and they typically yield power-efficiency gains of tens of percent.
I'm not so sure we're missing that much, personally; I think it's more just sheer scale, plus the complexity of the input and output connections (unlike machine learning networks, living things tend to have a much 'fuzzier' sense of inputs and outputs). And of course raw computational speed: our virtual networks are basically at a standstill compared to the parallelism of a real brain.
Just my thoughts though!