For instance, you can work out someone's location via the cellular network the phone is registered on. However, this can be inexact, devices can be switched off, and you'd still have to show that the person was actually carrying the phone.
If instead investigators can get into phones and find pictures, maybe even selfies, with accurate time and location stamps in the metadata, they've struck pay dirt.
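To give a sense of how little effort that takes once the photos are in hand, here's a minimal sketch using the Pillow imaging library that reads the capture time and GPS coordinates out of a photo's EXIF metadata; the file name is just a placeholder.

```python
# Minimal sketch: read the capture time and GPS position from a photo's
# EXIF metadata with the Pillow library. "IMG_0001.jpg" is a placeholder.
from PIL import Image
from PIL.ExifTags import GPSTAGS

EXIF_IFD = 0x8769            # Exif sub-directory (holds DateTimeOriginal)
GPS_IFD = 0x8825             # GPS sub-directory
DATETIME_ORIGINAL = 0x9003   # tag for when the shutter was pressed


def to_decimal(dms, ref):
    """Convert EXIF degrees/minutes/seconds to a signed decimal value."""
    value = float(dms[0]) + float(dms[1]) / 60 + float(dms[2]) / 3600
    return -value if ref in ("S", "W") else value


with Image.open("IMG_0001.jpg") as img:
    exif = img.getexif()
    taken_at = exif.get_ifd(EXIF_IFD).get(DATETIME_ORIGINAL)
    gps = {GPSTAGS.get(tag, tag): val for tag, val in exif.get_ifd(GPS_IFD).items()}

print("Taken:", taken_at)
if "GPSLatitude" in gps and "GPSLongitude" in gps:
    lat = to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    print("Location:", round(lat, 6), round(lon, 6))
```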
There are log files from the device operating system that can reveal sensitive information even if incriminating user data has been deleted. Getting into a smartphone usually also provides access to logged-in email accounts, apps and the user's contacts.
Most smartphone vendors understand that they need to protect users' sensitive and valuable personal information. They make unauthorised device access as difficult as possible.
There's strong encryption of data, dedicated security chips and auto-wiping of devices after too many failed passcode attempts; if smartphone users don't hand over their passcodes, it's almost impossible to break into the devices.
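As a rough illustration of the auto-wipe idea, and not any vendor's actual implementation (which is enforced by dedicated hardware), the sketch below keeps a failed-attempt counter and destroys the stored key material once an assumed limit is hit.

```python
# Purely illustrative sketch of "wipe after too many failed passcode
# attempts". Real devices enforce this with security hardware and key
# derivation bound to the chip; this only shows the control flow.
import hashlib
import hmac
import secrets

MAX_ATTEMPTS = 10  # assumed limit, for illustration only


class LockedDevice:
    def __init__(self, passcode: str):
        self._salt = secrets.token_bytes(16)
        self._digest = self._derive(passcode)
        self._failed = 0
        self.wiped = False

    def _derive(self, passcode: str) -> bytes:
        return hashlib.pbkdf2_hmac("sha256", passcode.encode(), self._salt, 200_000)

    def try_unlock(self, passcode: str) -> bool:
        if self.wiped:
            return False
        if hmac.compare_digest(self._derive(passcode), self._digest):
            self._failed = 0
            return True
        self._failed += 1
        if self._failed >= MAX_ATTEMPTS:
            # Stand-in for destroying the data-protection key, which makes
            # the encrypted contents unrecoverable.
            self._digest = b""
            self.wiped = True
        return False
```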
Apple especially has taken a hard line on device security and user privacy, to the point that the company has locked itself out of its own iDevices. This has led to some very public legal fights with law enforcement agencies demanding that Apple hack into iPhones and iPads - and Apple firing back that it's impossible to do that, which is great marketing, of course.
The best-laid plans of hardware and software developers often go awry, however, thanks to hardware and software bugs in smartphones. Take one good bug, or chain several minor ones together, and code up an exploit to attack the device with: bang, you're in and can carry on with forensic investigations.
This is where things get ethically murky, though. Bugs that allow unauthorised access to devices are dangerous in the wrong hands, which is why tech companies that care about their users try to fix them as soon as possible.
If you find a bug like that, the right thing to do is to report it to the company responsible for the product, following a responsible disclosure process that allows time to develop and deploy a fix.
Bugs fixed are bugs burnt, and an avenue of access to smartphone data closed. Instead of disclosing new "zero-days", some companies collect them, use them in their forensic tools and keep very, very quiet about it.
It's big business too. Brokers pay large sums of money for new vulnerabilities, tens to hundreds of thousands of dollars a pop, as long as the bug can be reproduced and shown to work.
Expensive as they are, the bugs usually have a short shelf life before they're discovered and patched.
Earlier this year, Apple worked out how a forensics outfit called GrayShift used its GrayKey box to crack open some locked iPhones via the Lightning data port. GrayShift kept a tight lid on GrayKey, and security researchers tried hard to figure out how the "crack box" worked from what little information was available.
Apple eventually managed to close it off with an update that shuts down data access via the Lightning/USB port if the iPhone has not been unlocked for a while or has run out of battery.
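Roughly speaking, and purely as an illustration of the behaviour rather than Apple's implementation, the logic looks something like the sketch below: the port's data lines only answer if the device has been unlocked recently and still has power. The one-hour window is an assumption.

```python
# Illustrative sketch of the port lock-out described above; not Apple's code.
# Data transfer over the port is allowed only for a limited window after the
# last unlock, and never once the battery has run flat.
from datetime import datetime, timedelta

USB_DATA_WINDOW = timedelta(hours=1)  # assumed cut-off, for illustration


class Handset:
    def __init__(self) -> None:
        self.last_unlock = datetime.now()
        self.battery_flat = False

    def unlock(self) -> None:
        self.last_unlock = datetime.now()

    def usb_data_allowed(self) -> bool:
        """True only if the device was unlocked recently and isn't flat."""
        if self.battery_flat:
            return False
        return datetime.now() - self.last_unlock < USB_DATA_WINDOW
```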
As a workaround for Apple's anti-GrayKey fix, some forensics vendors suggested police keep seized phones powered on and continuously charged on the way to the lab, in a special transport bag with batteries and cables, so as not to trigger the port shut-down.
On the one hand, forensic tools that use vulnerabilities to crack open locked smartphones can help solve awful crimes that would otherwise lie unresolved. On the other hand, there's no way to ensure that the same vulnerabilities won't leak and be used by criminals, something that's already happened many times.
They could also be sold to repressive regimes and cost lives.
Using exploits for forensics work is, in other words, a dangerous two-edged sword that needs meticulous oversight and some careful thought as to whether or not it's warranted.