Typical software development environments have several systems running the software. Each developer usually runs it on his or her own machine, a continuous integration server builds and tests it, and versions that pass there are published to an artifact repository and installed, either immediately, nightly, or manually, on a system called „staging“, „development“, „dev“, „dev-test“ or something like that. From there the software moves to a system called „test“, where the test team can work with it. The test team only gets versions that are actually of interest to them, so it makes sense to keep this system separate from „dev“. When a version is successful there, it might go to production, but usually there is just one more system, with whatever name, that is supposed to be identical to production and is used for final release tests before the software actually goes to production.
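To make this chain of environments a bit more concrete, here is a minimal sketch of such a promotion flow in Python; the stage names and the promote helper are made up purely for illustration and do not refer to any particular tool:

```python
from dataclasses import dataclass, field

# Hypothetical promotion chain; the stage names mirror the environments above.
STAGES = ["dev", "test", "pre-prod", "prod"]

@dataclass
class Build:
    version: str
    stage: str = "dev"                      # a fresh build lands on "dev" first
    history: list = field(default_factory=list)

    def promote(self) -> None:
        """Move this build to the next environment in the chain."""
        index = STAGES.index(self.stage)
        if index + 1 >= len(STAGES):
            raise ValueError(f"version {self.version} is already in production")
        self.history.append(self.stage)
        self.stage = STAGES[index + 1]

build = Build(version="1.4.2")
build.promote()        # "dev" -> "test": the test team gets this version
build.promote()        # "test" -> "pre-prod": final release tests
print(build.stage)     # pre-prod
```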
This kind of works as long as the software is built by one team. Increasingly, though, we observe cases where many teams work on the same software. Each team develops its own part and needs a reasonably stable version of everything else to develop against. This is especially true for external remote teams.
Now we live in a time where virtual servers are the normal way to work. Getting a new set of servers is no longer a matter of buying hardware, but just of running a few scripts. Good organizations can set up whole systems automatically in a matter of minutes. So it should be possible, within reason, to provide a few additional test systems when they are needed and to discard them again when they are no longer used. Abusing an existing system for a different purpose rarely works out smoothly.
So it is a good idea to select a technology that allows a system, or a whole landscape, to be set up automatically. This can be based on virtual servers, Docker containers or even physical hardware. It is a lot of work to set this up, but afterwards it becomes easy to add just one more test system for special tests, or to provide a stable release for other teams to develop against.
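As a rough sketch of what providing and discarding such a throwaway system can look like with Docker containers: the snippet below assumes a local Docker daemon and the Python „docker“ SDK (pip install docker); the image, the container name and the port are placeholders, not a recommendation for any particular setup.

```python
import docker

client = docker.from_env()

def create_test_system(name: str):
    """Spin up one more disposable test system as a container."""
    return client.containers.run(
        "nginx:latest",           # placeholder image; in practice the team's own build
        name=name,
        detach=True,
        ports={"80/tcp": None},   # let Docker pick a free host port
    )

def discard_test_system(container) -> None:
    """Throw the system away again once the special tests are done."""
    container.stop()
    container.remove()

if __name__ == "__main__":
    system = create_test_system("extra-test-system-1")
    try:
        print(f"test system '{system.name}' is up")
        # ... run release tests or let another team develop against it ...
    finally:
        discard_test_system(system)
```

The same idea applies to virtual servers or physical hardware; the point is only that creating and destroying an environment is a script call, not a project.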