In October 2022, an attacker drained $116 million from Mango Markets in about half an hour. The mechanics involved manipulating a low-liquidity token's price on Mango's own oracle, posting the inflated position as collateral, and borrowing heavily against it before the price normalized. The oracle read was valid — the price feed was fresh. The price was just engineered.
This gets cited as an oracle problem, which is accurate. What gets less attention is why Mango's oracle was vulnerable in that particular way, and why teams porting lending logic from Ethereum tend to get oracle handling wrong on Solana even when they know to be careful.
Pyth price feeds are regular accounts. They have to be passed into your instruction as a parameter, and if the account is stale, you read the stale price. The protocol validates staleness by checking the feed's publish_time against Clock::get()?.unix_timestamp. It should also check the confidence interval — Pyth widens it when price discovery is uncertain — but Chainlink's EVM interface doesn't surface confidence intervals in the same form, so teams that have spent years writing EVM lending code just don't have that check in their muscle memory. The gap isn't carelessness. It's that the failure mode doesn't exist in the environment they came from.
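For concreteness, here is the shape of both checks as a standalone Rust sketch. The field names mirror Pyth's Price struct (price, conf, publish_time), but this is an illustration, not the pyth-sdk API, and the two thresholds are hypothetical protocol parameters:

```rust
// Illustrative stand-in for an oracle price; not the real pyth-sdk type.
struct OraclePrice {
    price: i64,        // price in fixed-point units
    conf: u64,         // confidence interval, same units as price
    publish_time: i64, // unix timestamp of the last update
}

// Hypothetical protocol parameters — tune per market.
const MAX_STALENESS_SECS: i64 = 60;
const MAX_CONF_BPS: u64 = 100; // reject if conf exceeds 1% of price

fn validate_oracle(p: &OraclePrice, now: i64) -> Result<i64, &'static str> {
    // Staleness: the check most ported EVM codebases already have.
    if now - p.publish_time > MAX_STALENESS_SECS {
        return Err("stale price");
    }
    // Confidence: the check they tend to miss. A wide interval means
    // price discovery is uncertain — exactly the Mango-style setup.
    let price_abs = p.price.unsigned_abs();
    if p.conf.saturating_mul(10_000) > price_abs.saturating_mul(MAX_CONF_BPS) {
        return Err("confidence interval too wide");
    }
    Ok(p.price)
}
```

The second branch is the one that would have fired during the Mango attack: a fresh feed, but a confidence interval blown wide open by the engineered price move.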
State Doesn't Live Where You Think It Does
A Solana program is stateless bytecode. All mutable state lives in separate accounts passed explicitly to every instruction. There's no storage keyword. There's no address(this). Everything your protocol needs to read or write — market config, vault balances, user positions, fee state — is a separate account with its own address and byte layout, and every instruction that touches it must declare it upfront.
EVM contracts own their state. It's co-located with the code, implicitly, with no setup. You declare a mapping(address => UserPosition) and write to it. The storage lives at the contract address by default.
None of that exists on Solana, and that gap is where EVM developers spend their first few weeks confused in ways that are hard to articulate. The confusion doesn't feel like "I don't understand this" — it feels like "this keeps not working and I don't know why."
A deposit instruction in a Solana lending protocol passes 12 to 15 accounts. Market config, asset reserve, user obligation, liquidity vault, collateral vault, fee vault, oracle, user token accounts, system program, token program, rent sysvar. The verbosity isn't overhead. Those accounts are the state. The instruction routes through them, it doesn't own them.
PDAs are how this becomes workable at scale. A Program Derived Address is derived deterministically from seeds and a program ID, and it lands off the Ed25519 curve — meaning no private key can exist for it. The owning program signs for it via invoke_signed by supplying the original seeds; the runtime re-derives the address to verify. Vaults, user positions, market accounts — all of this can be created and controlled without the program holding a key.
One thing that's genuinely under-documented: the canonical bump (the highest nonce that produces a valid off-curve address) should always be stored in account data and passed in, not recomputed on-chain. find_program_address iterates from 255 downward, and each attempt costs roughly 1,500 compute units. On average only a couple of attempts are needed, but derive a handful of PDAs on every deposit and you're burning 15,000+ CU before any business logic runs. The Solana compute budget defaults to 200,000 CU per instruction, capped at 1.4 million per transaction. 15k is not nothing.
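The search loop itself is simple to picture. A toy sketch, with the hash-and-curve check stubbed out as a closure since that's the part costing ~1,500 CU per attempt (the real implementation hashes seeds, bump, program ID, and a domain separator, then checks the result against the Ed25519 curve):

```rust
// Sketch of find_program_address's internal search: count down from
// bump 255 and return the first candidate that lands off-curve. The
// closure stands in for the real SHA-256 + curve check.
fn find_bump(is_off_curve: impl Fn(u8) -> bool) -> Option<u8> {
    (0u8..=255).rev().find(|b| is_off_curve(*b))
}
```

The production pattern follows from this: run the search once, at account creation or off-chain, store the returned bump in account data, and thereafter verify it with a single create_program_address call instead of re-running the loop.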
The Attack Surface Doesn't Shrink. It Moves.
Re-entrancy doesn't exist on Solana. The runtime rejects the inner CPI with ReentrancyNotAllowed if program A calls program B and B tries to call back into A — only direct self-recursion is permitted, and only to a fixed depth. This is runtime enforcement. You can't write a cross-program re-entrancy bug because the runtime won't execute it.
What you get instead is a different set of vulnerabilities that EVM developers consistently underestimate, because the intuitions don't transfer.
Account data is untyped bytes. The runtime checks ownership — only the owner program can write to an account — but it doesn't check type. If your program deserializes an account without verifying account.owner == your_program_id, an attacker passes a crafted account owned by a program they control with a byte layout that happens to deserialize as your struct. The program accepts it. This is probably the most common exploitable vulnerability in audited Solana programs, and it keeps appearing because EVM developers have no equivalent to draw on.
Related, and weirder: Config { admin: Pubkey, fee: u64 } and User { authority: Pubkey, balance: u64 } are both 40 bytes with the same field layout. Without a type discriminator, a program expecting a Config account will silently accept a User account. Anchor prepends an 8-byte discriminator to every account and validates it before deserialization.
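Both checks are a few lines once you know to write them. A minimal sketch — the discriminator constant is hardcoded here for illustration, whereas Anchor derives it from a hash of the account type's name:

```rust
// Illustrative 8-byte type discriminator (Anchor computes this from
// the account type's name; hardcoded here for the sketch).
const CONFIG_DISCRIMINATOR: [u8; 8] = *b"config__";

fn validate_account(
    owner: &[u8; 32],
    program_id: &[u8; 32],
    data: &[u8],
) -> Result<(), &'static str> {
    // Ownership: the runtime guarantees only the owner program wrote
    // this data, so owner == our program is what makes the bytes
    // trustworthy at all.
    if owner != program_id {
        return Err("account not owned by this program");
    }
    // Discriminator: distinguishes Config from User even when the
    // remaining field layouts are byte-identical.
    if data.len() < 8 || data[..8] != CONFIG_DISCRIMINATOR {
        return Err("wrong account type");
    }
    Ok(())
}
```

Order matters: the discriminator check is meaningless without the ownership check, because an attacker's program can write any discriminator it likes into accounts it owns.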
The signer check failure is more embarrassing than it sounds. account.key == admin_pubkey without account.is_signer accepts any transaction that includes the admin's pubkey as an unsigned account reference. The Wormhole exploit involved something structurally similar — the guardian signature verification relied on a sysvar account whose address was never validated, so the attacker supplied a fake one and the check became a no-op. $320M in bridged assets was minted against guardian signatures that were never actually verified.
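Reduced to the two facts an instruction actually has about an account — its pubkey and its is_signer flag — the broken and correct versions look like this (a pure-function sketch, not a framework API):

```rust
// Broken: accepts the admin key as a mere reference. Anyone can
// include an unsigned reference to any pubkey in their transaction.
fn is_admin_broken(key: &[u8; 32], admin: &[u8; 32]) -> bool {
    key == admin
}

// Correct: the pubkey must match AND must have actually signed.
fn is_admin(key: &[u8; 32], is_signer: bool, admin: &[u8; 32]) -> bool {
    key == admin && is_signer
}
```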
What the Account Model Does to Protocol Design
The fixed-size array problem doesn't get talked about enough.
Account data is fixed at creation. realloc lets you grow an account by up to 10 KiB per instruction — a cap on total growth within the instruction, no matter how many realloc calls you make. MarginFi's LendingAccount supports up to 16 positions. Kamino's obligation structure has a similar cap. When a user hits the limit, they can't open a new position without closing one.
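The pattern this forces is a fixed-capacity array with a per-slot active flag. A sketch with illustrative field names, using MarginFi's cap of 16:

```rust
const MAX_POSITIONS: usize = 16;

// Illustrative position entry; field names are not MarginFi's.
#[derive(Clone, Copy, Default)]
struct Position {
    mint: [u8; 32],
    amount: u64,
    active: bool,
}

struct LendingAccount {
    positions: [Position; MAX_POSITIONS],
}

impl LendingAccount {
    fn new() -> Self {
        Self { positions: [Position::default(); MAX_POSITIONS] }
    }

    // Opening a position means finding a free slot. When none exists,
    // the user must close one first — there is no Vec to push into.
    fn open(&mut self, mint: [u8; 32], amount: u64) -> Result<usize, &'static str> {
        let slot = self
            .positions
            .iter()
            .position(|p| !p.active)
            .ok_or("all position slots in use; close one first")?;
        self.positions[slot] = Position { mint, amount, active: true };
        Ok(slot)
    }
}
```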
Adding a new field to an existing account type means migrating every existing account. You can do it upfront (one signed transaction per account, which scales terribly), lazily on first interaction (returning users absorb the cost), or by pre-allocating slack space at creation (everyone pays rent on bytes that might never be used).
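The lazy option tends to look like a version byte at the front of the account data, upgraded on first touch. A sketch over raw bytes — the layout is illustrative, and on-chain the growth would also require a realloc and possibly a rent top-up:

```rust
// Lazily migrate a v1 account layout to v2 by appending a new u64
// field (defaulted to zero) and bumping the version byte.
fn migrate_v1_to_v2(data: &mut Vec<u8>) -> Result<(), &'static str> {
    match data.first().copied() {
        Some(1) => {
            data[0] = 2;                                  // bump version
            data.extend_from_slice(&0u64.to_le_bytes());  // new field, default 0
            Ok(())
        }
        Some(2) => Ok(()), // already migrated; no-op for returning users
        _ => Err("unknown account version"),
    }
}
```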
The rent calculation is (data_length + 128) × 3480 × 2 lamports for rent exemption — 3,480 lamports per byte-year, a 128-byte account overhead, and a two-year exemption threshold. That's roughly 0.002 SOL for a 165-byte token account, more for larger ones. Before a user can receive any token, they need an associated token account for that specific mint. If they don't have one, the transfer fails.
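That formula, as a function — real programs should call the SDK's Rent::minimum_balance rather than hardcoding the constants, but the arithmetic is just:

```rust
// Rent-exempt minimum: (data_len + 128 bytes of account overhead)
// * 3480 lamports per byte-year * 2-year exemption threshold.
fn rent_exempt_lamports(data_len: u64) -> u64 {
    (data_len + 128) * 3480 * 2
}
```

An associated token account is 165 bytes, so every new mint a user touches costs them (or the protocol) about 0.002 SOL up front.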
This produces a failure mode in reward distributions: protocol ships a new reward token, runs the distribution, transactions fail for everyone who hasn't previously held that token. The protocol then has to create ATAs from treasury SOL, exclude those users, or delay.
Flash Loans, Briefly
Solana flash loans work through instruction introspection. The lending program reads sysvar::instructions — a sysvar that exposes the full current transaction's instruction set — and validates that a repay instruction appears later before releasing funds. If the repay fails, everything reverts at the transaction level.
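A toy model of that check — the Ix struct stands in for what the real sysvar deserializes to, and the one-byte repay discriminator is hypothetical:

```rust
// Illustrative stand-in for a deserialized instruction from
// sysvar::instructions; not the real sysvar API.
struct Ix {
    program_id: [u8; 32],
    data: Vec<u8>,
}

const REPAY_TAG: u8 = 7; // hypothetical discriminator for the repay instruction

// Before releasing funds, scan the rest of the transaction for a
// repay instruction addressed to this program at a LATER index.
fn has_later_repay(ixs: &[Ix], current_index: usize, this_program: &[u8; 32]) -> bool {
    ixs.iter().skip(current_index + 1).any(|ix| {
        ix.program_id == *this_program && ix.data.first() == Some(&REPAY_TAG)
    })
}
```

A real implementation also needs to verify the repay's accounts and amount, and that the sysvar account passed in is the genuine instructions sysvar — the exact omission that sank Wormhole.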
The transaction construction is different from EVM: all accounts for all intermediate steps have to be known and declared before submission. You can't discover a pool's address at runtime and route through it. Jupiter runs its routing engine off-chain not as an optimization but because Solana's account declaration requirement makes on-chain dynamic routing architecturally impossible.
Every account touched must be declared before the transaction is signed. Complex multi-hop routes with oracle accounts and tick arrays at each hop run into the 1,232-byte transaction size limit. Each account address is 32 bytes. v0 transactions with Address Lookup Tables push this boundary out somewhat, but they don't eliminate it.
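A back-of-the-envelope budget makes the ceiling concrete. This is a lower bound on the wire size, not the exact format — the real encoding adds small variable-length headers around these fields:

```rust
// Rough minimum transaction size: 64 bytes per signature, 32 bytes
// per unique declared account address, plus instruction data.
fn min_tx_bytes(signatures: u64, accounts: u64, ix_data: u64) -> u64 {
    signatures * 64 + accounts * 32 + ix_data
}
```

At those rates a single-signer transaction hits the 1,232-byte limit at roughly 35 declared accounts before much instruction data — which is why multi-hop routes run out of room so fast.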
The oracle staleness issue from Mango is the one I think about most. It's where the mental model from Ethereum is most subtly wrong and the gap is hardest to notice in code review. Teams checking staleness but not the confidence interval will pass an audit, deploy to mainnet, and still be vulnerable to the exact attack vector that cost Mango $116M. The confidence interval check is the thing most teams still don't do.