Karolina's article explains the cons of changing the interval to 1 minute. Even if you have super-powered servers, there is some static time DC RUM is configured to wait. For example, the CAS will wait up to 30 seconds for certain data files to be produced. If those files aren't produced in time, your CAS has still spent 30 seconds of the 1-minute interval just waiting for the files. Then, if it takes only 15 seconds to actually process the files, you'll be seeing new data at the 45th second of each minute. Although this can work, you're walking a fine line: if the AMD takes more than 15 seconds to package the data files in any given minute, you could actually fall behind on your one-minute processing.
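To make the arithmetic above concrete, here's a minimal sketch of that per-interval timing budget. The 30-second file wait and 15-second processing time are the example figures from above; the function name and structure are illustrative, not actual DC RUM settings.

```python
# Sketch of the per-interval timing budget described above.
# interval_s: length of the monitoring interval (e.g. 60 s for 1-minute mode)
# file_wait_s: static time the CAS waits for AMD data files (up to 30 s)
# processing_s: time actually spent processing the files

def processing_headroom(interval_s, file_wait_s, processing_s):
    """Return the seconds of slack left in one monitoring interval."""
    return interval_s - (file_wait_s + processing_s)

# 1-minute interval: 30 s waiting + 15 s processing leaves only 15 s
# of slack before the CAS starts falling behind.
print(processing_headroom(60, 30, 15))   # 15

# The same wait and processing times inside a 5-minute interval leave
# plenty of room, which is why the default interval is far more forgiving.
print(processing_headroom(300, 30, 15))  # 255
```

A negative result means an interval finished late, and those delays compound: each late interval pushes the next one's start back, which is the "falling behind" failure mode described above.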
I'm only aware of one customer who has successfully used 2-minute intervals, and their hardware requirements increased dramatically when moving from 5-minute to 2-minute intervals. What I didn't see mentioned outside of the article is that this also greatly increases the requirements for storage, baseline calculations, etc. And while it may work for the first few hours or days, the storage and processing loads will keep climbing as you fill the retention period with more and more data, so you won't know the full effect of the change for at least a couple of weeks.
I look after a customer running with 1-minute intervals.
It works, but it has been very difficult to get to that stage. It requires substantially more CAS servers and CAS server resources. This customer went from 6 CAS at 5-minute intervals to 30 CAS, all backed with 80 GB of RAM, solid-state disks, and many configuration tweaks to get it working reliably. Even then, processing delays creep up during peak periods when each interval takes slightly over a minute to process. The customer is adding more CAS (another 15) to try to alleviate this.