On liquidation, if netPnLE36 <= 0, the premium paid by the liquidator is locked in the contract.
Severity: high

When liquidating a position, the liquidator is required to pay a premium to the Lender. The premium is accumulated in sharingProfitTokenAmts together with the Lender's profit and paid out in `_shareProfitsAndRepayAllDebts()`.
```
(
    netPnLE36,
    lenderProfitUSDValueE36,
    borrowTotalUSDValueE36,
    positionOpenUSDValueE36,
    sharingProfitTokenAmts
) = calcProfitInfo(_positionManager, _user, _posId);
// 2. add liquidation premium to the shared profit amounts
uint lenderLiquidatationPremiumBPS = IConfig(config).lenderLiquidatePremiumBPS();
for (uint i; i < sharingProfitTokenAmts.length; ) {
    sharingProfitTokenAmts[i] +=
        (pos.openTokenInfos[i].borrowAmt * lenderLiquidatationPremiumBPS) / BPS;
    unchecked {
        ++i;
    }
}
```

However, if netPnLE36 <= 0, `_shareProfitsAndRepayAllDebts()` does not pay any profit to the Lender, so the premium accumulated in sharingProfitTokenAmts is not paid out either. The premium paid by the liquidator is therefore locked in the contract.
```
function _shareProfitsAndRepayAllDebts(
    address _positionManager, address _posOwner, uint _posId,
    int _netPnLE36, uint[] memory _shareProfitAmts, address[] memory _tokens,
    OpenTokenInfo[] memory _openTokenInfos
) internal {
    // 0. load states
    address _lendingProxy = lendingProxy;
    // 1. if net pnl is positive, share profits to lending proxy
    if (_netPnLE36 > 0) {
        for (uint i; i < _shareProfitAmts.length; ) {
            if (_shareProfitAmts[i] > 0) {
                ILendingProxy(_lendingProxy).shareProfit(_tokens[i], _shareProfitAmts[i]);
            }
            unchecked {
                ++i;
            }
        }
        emit ProfitShared(_posOwner, _posId, _tokens, _shareProfitAmts);
    }
```

Also, when a position is closed, the tokens remaining in the contract are sent to the caller, so the next person who closes a position will receive the locked tokens.
```
underlyingAmts = new uint[](underlyingTokens.length);
for (uint i; i < underlyingTokens.length; ) {
    underlyingAmts[i] = IERC20(underlyingTokens[i]).balanceOf(address(this));
    if (underlyingAmts[i] < _params.minUnderlyingAmts[i]) {
        revert TokenAmountLessThanExpected(
            underlyingTokens[i],
            underlyingAmts[i],
            _params.minUnderlyingAmts[i]
        );
    }
    _doRefund(underlyingTokens[i], underlyingAmts[i]);
    unchecked {
        ++i;
    }
```

Recommendation: Modify `_shareProfitsAndRepayAllDebts()` so that the premium is shared even when the net PnL is not positive:
```
function _shareProfitsAndRepayAllDebts(
    address _positionManager,
    address _posOwner,
    uint _posId,
    int _netPnLE36,
    uint[] memory _shareProfitAmts,
    address[] memory _tokens,
    OpenTokenInfo[] memory _openTokenInfos
) internal {
    // 0. load states
    address _lendingProxy = lendingProxy;
    // 1. if net pnl is positive, share profits to lending proxy
-   if (_netPnLE36 > 0) {
        for (uint i; i < _shareProfitAmts.length; ) {
            if (_shareProfitAmts[i] > 0) {
                ILendingProxy(_lendingProxy).shareProfit(_tokens[i], _shareProfitAmts[i]);
            }
            unchecked {
                ++i;
            }
        }
        emit ProfitShared(_posOwner, _posId, _tokens, _shareProfitAmts);
-   }
```
The liquidated person can make the liquidator lose premium by adding collateral in advance
Severity: high

A position can be liquidated when its debtRatioE18 >= 1e18 or its startLiqTimestamp != 0. On liquidation the liquidator pays a premium, but the liquidator's profit depends on the position's health factor and deltaTime; when the discount is 0, the liquidator simply loses the premium.
```
uint deltaTime;
// 1.1 check the amount of time since position is marked
if (pos.startLiqTimestamp > 0) {
    deltaTime = Math.max(deltaTime, block.timestamp - pos.startLiqTimestamp);
}
// 1.2 check the amount of time since position is past the deadline
if (block.timestamp > pos.positionDeadline) {
    deltaTime = Math.max(deltaTime, block.timestamp - pos.positionDeadline);
}
// 1.3 cap time-based discount, as configured
uint timeDiscountMultiplierE18 = Math.max(
    IConfig(config).minLiquidateTimeDiscountMultiplierE18(),
    ONE_E18 - deltaTime * IConfig(config).liquidateTimeDiscountGrowthRateE18()
);
// 2. calculate health-based discount factor
uint curHealthFactorE18 = (ONE_E18 * ONE_E18) /
    getPositionDebtRatioE18(_positionManager, _user, _posId);
uint minDesiredHealthFactorE18 = IConfig(config).minDesiredHealthFactorE18s(strategy);
// 2.1 interpolate linear health discount factor (according to the diagram in documentation)
uint healthDiscountMultiplierE18 = ONE_E18;
if (curHealthFactorE18 < ONE_E18) {
    healthDiscountMultiplierE18 = curHealthFactorE18 > minDesiredHealthFactorE18
        ? ((curHealthFactorE18 - minDesiredHealthFactorE18) * ONE_E18) /
            (ONE_E18 - minDesiredHealthFactorE18)
        : 0;
}
// 3. final liquidation discount = apply the two discount methods together
liquidationDiscountMultiplierE18 =
    (timeDiscountMultiplierE18 * healthDiscountMultiplierE18) /
    ONE_E18;
```

Consider the following scenario.
Alice notices Bob's position with debtRatioE18 >= 1e18 and calls `liquidatePosition()` to liquidate it.
Bob observes Alice's transaction, front-runs it with a call to `markLiquidationStatus()` so that startLiqTimestamp == block.timestamp, and calls `adjustExtraColls()` to bring the position back to a healthy state.
Alice's transaction executes; since Bob's position has startLiqTimestamp != 0, it can still be liquidated, but since the discount is 0, Alice loses the premium. This breaks the protocol's liquidation mechanism: liquidators will avoid initiating liquidations for fear of losing assets, which will lead to more bad debt.

Recommendation: Consider having the liquidated person bear the premium, or at least have the liquidator pass a minDiscount parameter setting the minimum acceptable discount.
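A minimal sketch of the second suggestion, reusing the variable names from the excerpt above; the `_minDiscountE18` parameter and the custom error are hypothetical additions:
```
error DiscountBelowMinimum(uint actualE18, uint minE18);

function liquidatePosition(
    address _positionManager,
    address _user,
    uint _posId,
    uint _minDiscountE18 // liquidator's minimum acceptable discount, 1e18 = 100%
) external {
    uint liquidationDiscountMultiplierE18 =
        _calcLiquidationDiscountE18(_positionManager, _user, _posId);
    // Revert instead of letting a front-run reduce the discount to zero.
    if (liquidationDiscountMultiplierE18 < _minDiscountE18) {
        revert DiscountBelowMinimum(liquidationDiscountMultiplierE18, _minDiscountE18);
    }
    // ... proceed with liquidation as before ...
}
```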
First depositor can steal asset tokens of others
Severity: high

The first depositor can be front-run by an attacker and, as a result, will lose a considerable part of the assets provided. When the pool has no share supply, `_mintInternal()` mints a number of shares equal to the assets provided. An attacker can abuse this situation and profit from the rounding-down in the share calculation once the supply is non-zero.
```
function _mintInternal(address _receiver, uint _balanceIncreased, uint _totalAsset
) internal returns (uint mintShares) {
    unfreezeTime[_receiver] = block.timestamp + mintFreezeInterval;
    if (freezeBuckets.interval > 0) {
        FreezeBuckets.addToFreezeBuckets(freezeBuckets, _balanceIncreased.toUint96());
    }
    uint _totalSupply = totalSupply();
    if (_totalAsset == 0 || _totalSupply == 0) {
        mintShares = _balanceIncreased + _totalAsset;
    } else {
        mintShares = (_balanceIncreased * _totalSupply) / _totalAsset;
    }
    if (mintShares == 0) {
        revert ZeroAmount();
    }
    _mint(_receiver, mintShares);
}
```

Consider the following scenario.
Alice wants to deposit 2M * 1e6 USDC to a pool.
Bob observes Alice's transaction, front-runs it by depositing 1 wei of USDC to mint 1 wei of shares, and transfers 1M * 1e6 USDC directly to the pool.
Alice's transaction executes; since _totalAsset = 1M * 1e6 + 1 and _totalSupply = 1, Alice receives 2M * 1e6 * 1 / (1M * 1e6 + 1) = 1 share.
The pool now holds 3M * 1e6 + 1 assets against 2 shares. Bob profits ~0.5M USDC and Alice loses ~0.5M USDC.

Recommendation: When _totalSupply == 0, send the first minimum-liquidity LP tokens to a dead address to enable share dilution. Another option is to use the ERC4626 implementation from OpenZeppelin (https://github.com/OpenZeppelin/openzeppelin-contracts/blob/master/contracts/token/ERC20/extensions/ERC4626.sol#L199C14-L208).
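A minimal sketch of the first option; the MINIMUM_LIQUIDITY constant and its value are illustrative (OpenZeppelin's `_mint` rejects address(0), hence the dead address):
```
uint256 internal constant MINIMUM_LIQUIDITY = 1e3; // illustrative dead-share amount

function _mintInternal(address _receiver, uint _balanceIncreased, uint _totalAsset
) internal returns (uint mintShares) {
    uint _totalSupply = totalSupply();
    if (_totalAsset == 0 || _totalSupply == 0) {
        mintShares = _balanceIncreased + _totalAsset;
        if (mintShares <= MINIMUM_LIQUIDITY) revert ZeroAmount();
        // Permanently lock the first shares so a 1-wei deposit followed by a
        // direct donation can no longer inflate the share price.
        _mint(address(0xdead), MINIMUM_LIQUIDITY);
        mintShares -= MINIMUM_LIQUIDITY;
    } else {
        mintShares = (_balanceIncreased * _totalSupply) / _totalAsset;
    }
    if (mintShares == 0) revert ZeroAmount();
    _mint(_receiver, mintShares);
}
```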
An attacker can use larger dust amounts when opening a position to perform griefing attacks
Severity: high

When opening a position, unused assets are sent to the dustVault as dust. Since this dust is not subtracted from inputAmt, it is still included in the calculation of positionOpenUSDValueE36, which understates netPnLE36; an attacker can exploit this to perform a griefing attack.
```
uint inputTotalUSDValueE36;
for (uint i; i < openTokenInfos.length; ) {
    inputTotalUSDValueE36 += openTokenInfos[i].inputAmt * tokenPriceE36s[i];
    borrowTotalUSDValueE36 += openTokenInfos[i].borrowAmt * tokenPriceE36s[i];
    unchecked {
        ++i;
    }
}
// 1.3 calculate net pnl (including strategy users & borrow profit)
positionOpenUSDValueE36 = inputTotalUSDValueE36 + borrowTotalUSDValueE36;
netPnLE36 = positionCurUSDValueE36.toInt256() - positionOpenUSDValueE36.toInt256();
```

Recommendation: Consider subtracting dust from inputAmt when opening a position.
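A minimal sketch, assuming the amounts routed to the dustVault are recorded in a hypothetical `dustAmts` array aligned with openTokenInfos:
```
for (uint i; i < openTokenInfos.length; ) {
    // Value the position using only the assets that actually entered it.
    uint effectiveInputAmt = openTokenInfos[i].inputAmt - dustAmts[i];
    inputTotalUSDValueE36 += effectiveInputAmt * tokenPriceE36s[i];
    borrowTotalUSDValueE36 += openTokenInfos[i].borrowAmt * tokenPriceE36s[i];
    unchecked {
        ++i;
    }
}
```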
An attacker can increase liquidity to the position's UniswapNFT to prevent the position from being closed
Severity: high

UniswapV3NPM allows anyone to increase the liquidity of any NFT.
```
function increaseLiquidity(IncreaseLiquidityParams calldata params)
    external payable override checkDeadline(params.deadline)
    returns (
        uint128 liquidity, uint256 amount0, uint256 amount1)
{
    Position storage position = _positions[params.tokenId];
    PoolAddress.PoolKey memory poolKey = _poolIdToPoolKey[position.poolId];
    IUniswapV3Pool pool;
    (liquidity, amount0, amount1, pool) = addLiquidity(
```

When closing a position, `_redeemPosition()` only decreases the NFT's initial liquidity and then burns the NFT.
```
function _redeemPosition(
    address _user, uint _posId
) internal override returns (address[] memory rewardTokens, uint[] memory rewardAmts) {
    address _positionManager = positionManager;
    uint128 collAmt = IUniswapV3PositionManager(_positionManager).getPositionCollAmt(_user, _posId);
    // 1. take lp & extra coll tokens from lending proxy
    _takeAllCollTokens(_positionManager, _user, _posId, address(this));
    UniV3ExtraPosInfo memory extraPosInfo = IUniswapV3PositionManager(_positionManager)
        .getDecodedExtraPosInfo(_user, _posId);
    address _uniswapV3NPM = uniswapV3NPM; // gas saving
    // 2. remove underlying tokens from lp (internal remove in NPM)
    IUniswapV3NPM(_uniswapV3NPM).decreaseLiquidity(
        IUniswapV3NPM.DecreaseLiquidityParams({
            tokenId: extraPosInfo.uniV3PositionId, liquidity: collAmt, amount0Min: 0,
            amount1Min: 0,
            deadline: block.timestamp
        })
    );
    // rest of code
    // 4. burn LP position
    IUniswapV3NPM(_uniswapV3NPM).burn(extraPosInfo.uniV3PositionId);
}
```

If the NFT's liquidity is not 0, burning fails.
```
function burn(uint256 tokenId) external payable override isAuthorizedForToken(tokenId) {
    Position storage position = _positions[tokenId];
    require(position.liquidity == 0 && position.tokensOwed0 == 0 && position.tokensOwed1 == 0, 'Not cleared');
    delete _positions[tokenId];
    _burn(tokenId);
}
```

This allows an attacker to add 1 wei of liquidity to the position's NFT to prevent the position from being closed; later, when the position expires, the attacker can liquidate it.

Recommendation: Consider decreasing the NFT's actual liquidity (obtained via uniswapV3NPM.positions()) in `_redeemPosition()`, instead of the initial liquidity.
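A minimal sketch of the suggested change inside `_redeemPosition()`, assuming the project's IUniswapV3NPM interface exposes `positions()`; in the canonical NonfungiblePositionManager, liquidity is the eighth return value:
```
// Read the NFT's current liquidity instead of the stored collAmt.
(, , , , , , , uint128 actualLiquidity, , , , ) =
    IUniswapV3NPM(_uniswapV3NPM).positions(extraPosInfo.uniV3PositionId);
IUniswapV3NPM(_uniswapV3NPM).decreaseLiquidity(
    IUniswapV3NPM.DecreaseLiquidityParams({
        tokenId: extraPosInfo.uniV3PositionId,
        liquidity: actualLiquidity, // removes attacker-donated liquidity as well
        amount0Min: 0,
        amount1Min: 0,
        deadline: block.timestamp
    })
);
```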
SwapHelper.getCalldata should check whitelistedRouters[_router]
Severity: medium

`SwapHelper.getCalldata()` returns swap calldata based on the input and uses whitelistedRouters to restrict the _router parameter. The issue is that when `setWhitelistedRouters()` sets a router's status to false, it does not reset the corresponding data in routerTypes and swapInfos, so the router remains usable in `getCalldata()`. As a result, users can still swap through a router whose data is no longer valid.
```
for (uint i; i < _statuses.length; ) {
    whitelistedRouters[_routers[i]] = _statuses[i];
    if (_statuses[i]) {
        routerTypes[_routers[i]] = _types[i];
        emit SetRouterType(_routers[i], _types[i]);
    }
    emit SetWhitelistedRouter(_routers[i], _statuses[i]);
    unchecked {
        ++i;
    }
}
```

Recommendation: Consider checking whitelistedRouters[_router] in `SwapHelper.getCalldata()`.
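A minimal sketch; the function's real parameter list is not shown in the excerpt, so the signature and error below are illustrative:
```
error RouterNotWhitelisted(address router);

function getCalldata(address _router, bytes memory _swapParams) external view returns (bytes memory) {
    // Reject routers that were de-whitelisted, even if their routerTypes /
    // swapInfos entries were never cleared.
    if (!whitelistedRouters[_router]) revert RouterNotWhitelisted(_router);
    // ... existing calldata construction based on routerTypes[_router] ...
}
```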
The swap when closing a position does not consider shareProfitAmts
Severity: medium

When closing a position, a token swap is performed to ensure the closer can repay the debt: when operation == EXACT_IN, borrowAmt of tokens is excluded from the swap, and when operation == EXACT_OUT, borrowAmt of tokens must be obtained by the swap. The issue is that the closer must pay not only borrowAmt but also shareProfitAmts, so closing fails at percentSwapE18 = 100% due to insufficient tokens. Although the closer can adjust percentSwapE18 to make the closure succeed, this greatly increases complexity.
```
for (uint i; i < swapParams.length; ) {
    // find excess amount after repay
    uint swapAmt = swapParams[i].operation == SwapOperation.EXACT_IN
        ? IERC20(swapParams[i].tokenIn).balanceOf(address(this)) - openTokenInfos[i].borrowAmt
        : openTokenInfos[i].borrowAmt - IERC20(swapParams[i].tokenOut).balanceOf(address(this));
    swapAmt = (swapAmt * swapParams[i].percentSwapE18) / ONE_E18;
    if (swapAmt == 0) {
        revert SwapZeroAmount();
    }
```

Recommendation: Consider taking shareProfitAmts into account when calculating swapAmt.
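A minimal sketch, assuming shareProfitAmts is indexed consistently with swapParams and openTokenInfos:
```
for (uint i; i < swapParams.length; ) {
    // Reserve both the debt and the lender's profit share.
    uint owed = openTokenInfos[i].borrowAmt + shareProfitAmts[i];
    uint swapAmt = swapParams[i].operation == SwapOperation.EXACT_IN
        ? IERC20(swapParams[i].tokenIn).balanceOf(address(this)) - owed
        : owed - IERC20(swapParams[i].tokenOut).balanceOf(address(this));
    swapAmt = (swapAmt * swapParams[i].percentSwapE18) / ONE_E18;
    if (swapAmt == 0) {
        revert SwapZeroAmount();
    }
    unchecked {
        ++i;
    }
}
```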
The freeze mechanism reduces the borrowableAmount, which reduces Lender's yield
Severity: medium

The contract has two freeze intervals: mintFreezeInterval, which prevents users from flash deposits and withdrawals, and freezeBuckets.interval, which prevents borrowers from draining funds. Both are applied when a user deposits, and because their unlock times differ, borrowableAmount is significantly reduced, which reduces the Lender's yield.
```
function _mintInternal(address _receiver, uint _balanceIncreased, uint _totalAsset
) internal returns (uint mintShares) {
    unfreezeTime[_receiver] = block.timestamp + mintFreezeInterval;
    if (freezeBuckets.interval > 0) {
        FreezeBuckets.addToFreezeBuckets(freezeBuckets, _balanceIncreased.toUint96());
    }
```

Consider freezeBuckets.interval == mintFreezeInterval = 1 day, with 100 ETH in the LendingPool and borrowableAmount = 100 ETH. At day 0 + 1s, Alice deposits 50 ETH: borrowableAmount = 150 ETH - lockedAmount (50 ETH) = 100 ETH. The 50 ETH frozen in freezeBuckets unlocks on day 2, while unfreezeTime[Alice] = day 1 + 1s. At day 1 + 1s, unfreezeTime[Alice] is reached and Alice can withdraw 50 ETH: borrowableAmount = 100 ETH - lockedAmount (50 ETH) = 50 ETH. If Bob wants to borrow the available funds at this time, he can only borrow 50 ETH even though 100 ETH is actually available, halving the Lender's yield. At day 2 + 1s, the freezeBuckets amount unlocks and borrowableAmount = 100 ETH - lockedAmount (0 ETH) = 100 ETH.

Recommendation: Consider making mintFreezeInterval >= 2 * freezeBuckets.interval, which makes unfreezeTime greater than or equal to the unfreeze time of freezeBuckets.
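A minimal sketch of enforcing the relation when the intervals are configured; the setter name, access modifier, and field types are illustrative:
```
function setFreezeIntervals(uint256 _mintFreezeInterval, uint256 _bucketInterval) external onlyOwner {
    // A deposit can stay in a freeze bucket for up to two full intervals,
    // so the personal unlock must never come earlier than that.
    require(_mintFreezeInterval >= 2 * _bucketInterval, "mint freeze interval too short");
    mintFreezeInterval = _mintFreezeInterval;
    freezeBuckets.interval = _bucketInterval;
}
```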
A malicious operator can drain the vault funds in one transaction
Severity: high

The vault operator can swap tokens using the `trade()` function, passing the following structure for each trade:
```
struct tradeInput {
    address spendToken;
    address receiveToken;
    uint256 spendAmt;
    uint256 receiveAmtMin;
    address routerAddress;
    uint256 pathIndex;
}
```

Notably, receiveAmtMin is meant to guarantee acceptable slippage, but an operator can simply pass 0 to make sure the trade executes. This allows an operator to steal all the funds in the vault by architecting a sandwich attack:
Flash-loan a large amount of funds.
Skew the token proportions in a pool used for trading by almost completely depleting the target token.
Perform the trade at >99% slippage.
Sell target tokens for source tokens on the manipulated pool, returning it to the original ratio.
Pay off the flash loan and keep the tokens traded at 99% slippage. Unlike most sandwich attacks, this one can be done in a single transaction.

Recommendation: The contract should enforce sensible slippage parameters.
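One possible shape for such a check, assuming the vault has access to a price oracle and a governance-set maxSlippageBps (both names are illustrative):
```
// Derive a floor for receiveAmtMin from an oracle quote instead of trusting the operator.
uint256 oracleQuote = priceOracle.quote(input.spendToken, input.receiveToken, input.spendAmt);
uint256 minAcceptable = (oracleQuote * (10_000 - maxSlippageBps)) / 10_000;
require(input.receiveAmtMin >= minAcceptable, "trade: slippage too high");
```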
A malicious operator can steal all user deposits
Severity: high

In the Orbital architecture, each Vault user has a numerator which represents their share of the vault holdings. The denominator is by design the sum of all user numerators, an invariant maintained across deposits and withdrawals. For maximum precision the denominator should be a very large value; intuitively, numerators could then be spread across users without losing precision. The critical calculations occur in these lines in `deposit()`:
```
if (D == 0) { // initial deposit
    uint256 sumDenoms = 0;
    for (uint256 i = 0; i < tkns.length; i++) {
        sumDenoms += AI.getAllowedTokenInfo(tkns[i]).initialDenominator;
    }
    require(sumDenoms > 0 && sumDenoms <= maxInitialDenominator, "invalid sumDenoms");
    // initial numerator and denominator are the same, and are greater than any
    // possible balance in the vault. this ensures precision in the vault's
    // balances. User Balance = (N*T)/D will have rounding errors always 1 wei or less.
    deltaN = sumDenoms;
} else {
    // deltaN = (amt * D)/T;
    deltaN = Arithmetic.overflowResistantFraction(amt, D, T);
}
```

In the initial deposit, the Vault sums all token initialDenominators to get the final denominator. It is assumed that the vault will never hold this amount in total balances (each token denominator is worth around $100m). In any other deposit, the deltaN (numerator) credited to the depositor is denominator * deposit amount / existing balance. When the denominator is huge, this calculation is highly precise. However, when the denominator is 1, a serious issue occurs: if the user's deposit amount is one wei smaller than the existing balance, deltaN is zero. This property has led to the well-known ERC4626 inflation attack, where an attacker donates (sends directly to the contract) an amount so that the following deposit is consumed without any shares given to the user. In fact, it is possible to reduce the denominator to 1 and resurrect that attack. The root cause is that the initial deposit denominator is not linear in the deposit amount. Consider the attack flow below, performed by a malicious operator:
Deploy an ETH/BTC pool.
Flash-loan $100mm in ETH and BTC each.
Perform an initial deposit of $100mm in ETH/BTC.
From another account, deposit 1 wei of ETH/BTC -> receive 1 deltaN.
Withdraw 100% as operator, reducing the denominator to 1.
Pay off the flash loan.
Wait for victim deposits.
When a deposit arrives in the mempool, front-run it with a donation of an equivalent amount. The victim will not receive any shares (numerator).
Any future deposits can be front-run again; any deposit of less than the current balance will be lost.

Recommendation: Consider checking that the user's received deltaN is reasonable. Calculate the expected withdrawable value (deltaN / denominator * balance) and verify that it is close enough to the deposited amount.
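A minimal sketch of that sanity check after deltaN is computed; toleranceBps is an illustrative configuration value:
```
// Value the newly minted numerator against the post-deposit state.
uint256 withdrawable = Arithmetic.overflowResistantFraction(deltaN, T + amt, D + deltaN);
// The depositor's withdrawable value must be within toleranceBps of what they put in.
require(withdrawable * 10_000 >= amt * (10_000 - toleranceBps), "deposit: deltaN too imprecise");
```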
Removing a trade path in router will cause serious data corruption
Severity: medium

The RouterInfo represents a single UniV3-compatible router which supports a list of token paths. It uses the following data structures:
```
mapping(address => mapping(address => listInfo)) private allowedPairsMap;
pair[] private allowedPairsList;
```

```
struct listInfo {
    bool allowed;
    uint256 listPosition;
}
struct pair {
    address token0;
    address token1;
    uint256 numPathsAllowed;
}
```

When an admin specifies a new path from token0 to token1, `_increasePairPaths()` is called.
```
function _increasePairPaths(address token0, address token1) private {
    listInfo storage LI = allowedPairsMap[token0][token1];
    if (!LI.allowed){
        LI.allowed = true;
        LI.listPosition = allowedPairsList.length;
        allowedPairsList.push(pair(token0, token1, 0));
    }
    allowedPairsList[LI.listPosition].numPathsAllowed++;
}
```

When a path is removed, the complementary function is called.
```
function _decreasePairPaths(address token0, address token1) private {
    listInfo storage LI = allowedPairsMap[token0][token1];
    require(LI.allowed, "RouterInfo: pair not allowed");
    allowedPairsList[LI.listPosition].numPathsAllowed--;
    if (allowedPairsList[LI.listPosition].numPathsAllowed == 0){
        allowedPairsList[LI.listPosition] = allowedPairsList[allowedPairsList.length - 1];
        allowedPairsList.pop();
        LI.allowed = false;
    }
}
```

When the last path is removed, the contract reuses the index of the removed pair to store the last pair in the list, then pops the list, having already copied the last pair. The issue is that the moved pair's corresponding listInfo structure is not updated to track its new index in the pairs list. Future usage of that pair will use a stale index which, at this moment, is beyond the array's bounds. When a new pair is created, it will share the index with the corrupted pair. This can cause a variety of serious issues: for example, it will not be possible to remove paths from the corrupted pair until a new pair is created, at which point the new pair will have a wrong numPathsAllowed, as it is shared.

Recommendation: Update the listPosition member of the last pair in the list before repositioning it.
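A minimal sketch of the fixed swap-and-pop:
```
function _decreasePairPaths(address token0, address token1) private {
    listInfo storage LI = allowedPairsMap[token0][token1];
    require(LI.allowed, "RouterInfo: pair not allowed");
    allowedPairsList[LI.listPosition].numPathsAllowed--;
    if (allowedPairsList[LI.listPosition].numPathsAllowed == 0) {
        pair memory moved = allowedPairsList[allowedPairsList.length - 1];
        // Repoint the moved pair's map entry at its new slot before overwriting.
        allowedPairsMap[moved.token0][moved.token1].listPosition = LI.listPosition;
        allowedPairsList[LI.listPosition] = moved;
        allowedPairsList.pop();
        LI.allowed = false;
    }
}
```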
Attacker can DOS deposit transactions due to strict verifications
Severity: medium

When users deposit funds to the Vault, it verifies that the proportions between the deposited tokens match the vault's current token balances.
```
uint256[] memory balances = vlt.balances();
// ensure deposits are in the same ratios as the vault's current balances
require(functions.ratiosMatch(balances, amts), "ratios don't match");
```

The essential part of the check is below:
```
for (uint256 i = 0; i < sourceRatios.length; i++) {
    // if (targetRatios[i] != (targetRatios[greatestIndex] * sourceRatios[i]) / greatest) {
    if (targetRatios[i] !=
        Arithmetic.overflowResistantFraction(targetRatios[greatestIndex], sourceRatios[i], greatest)) {
        return false;
    }
}
```

The exact logic here is not important; note only that a small change in the balance of one of the vault tokens changes the number of tokens required to maintain the correct ratio. The exact amounts to be deposited are passed as targetRatios, and sourceRatios is the current balances. An attacker can therefore directly transfer a negligible amount of some vault token to the contract so that the amounts the user specified in targetRatios no longer line up with the expected proportion, and the deposit reverts. Essentially this abuses the over-granular verification of ratios, leading to a DOS of any deposit in the mempool.

Recommendation: Loosen the restriction on deposit ratios. A DOS attack should cost an amount that the vault creditors would be happy to live with.
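A minimal sketch of a tolerance-based comparison; toleranceBps is an illustrative parameter:
```
for (uint256 i = 0; i < sourceRatios.length; i++) {
    uint256 expected = Arithmetic.overflowResistantFraction(
        targetRatios[greatestIndex], sourceRatios[i], greatest
    );
    uint256 diff = targetRatios[i] > expected ? targetRatios[i] - expected : expected - targetRatios[i];
    // Accept small deviations so dust donations cannot invalidate a pending deposit.
    if (diff * 10_000 > expected * toleranceBps) {
        return false;
    }
}
```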
User deposits can fail despite using the correct method for calculation of deposit amounts
Severity: medium

Users can call `getAmtsNeededForDeposit()` to get the token amounts that maintain the required proportions for a vault deposit. It performs a calculation very similar to the one in `ratiosMatch()`, which verifies the deposit.
```
for (uint256 i = 0; i < balances.length; i++) {
    if (i == indexOfReferenceToken) {
        amtsNeeded[i] = amtIn;
    } else {
        // amtsNeeded[i] = (amtIn * balances[i]) / balances[indexOfReferenceToken];
        amtsNeeded[i] = Arithmetic.overflowResistantFraction(
            amtIn, balances[i], balances[indexOfReferenceToken]
        );
    }
}
```

However, a difference between the verification function and the getter is that the getter accepts any reference token, while verification uses proportions anchored to the deposit amount for the largest balance in the vault. These fractions may differ by a small amount, so `getAmtsNeededForDeposit()` can return values that will not be accepted at deposit time because they round differently.

Recommendation: Calculate the amounts needed using the ratio between the largest balance and the deposit amount. This lines the numbers up the way verification expects.
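A minimal sketch: first translate the caller's reference amount into the greatest-balance token, then derive every other amount with the exact fraction that `ratiosMatch()` will recompute:
```
uint256 greatestIndex = 0;
for (uint256 i = 1; i < balances.length; i++) {
    if (balances[i] > balances[greatestIndex]) greatestIndex = i;
}
uint256 amtGreatest = Arithmetic.overflowResistantFraction(
    amtIn, balances[greatestIndex], balances[indexOfReferenceToken]
);
for (uint256 i = 0; i < balances.length; i++) {
    // Same rounding path as verification, so the results always pass ratiosMatch().
    amtsNeeded[i] = Arithmetic.overflowResistantFraction(
        amtGreatest, balances[i], balances[greatestIndex]
    );
}
```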
Several popular ERC20 tokens are incompatible with the vault due to MAX approve
Severity: low

There are several instances where the vault approves use of funds to the manager or a trade router, setting the approval to MAX_UINT256.
```
for (uint i = 0; i < tokens.length; i++) {
    // allow vault manager to withdraw tokens
    IERC20(tokens[i]).safeIncreaseAllowance(ownerIn, type(uint256).max);
}
```

The issue is that several popular tokens (https://github.com/d-xo/weird-erc20#revert-on-large-approvals--transfers), such as UNI and COMP, do not support allowances above UINT_96. The contract will not be able to interoperate with them.

Recommendation: Consider setting the allowance to UINT_96. Whenever the allowance is consumed, re-approve up to UINT_96.
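A minimal sketch of topping allowances up to the uint96 ceiling instead:
```
for (uint i = 0; i < tokens.length; i++) {
    uint256 current = IERC20(tokens[i]).allowance(address(this), ownerIn);
    if (current < type(uint96).max) {
        // UNI/COMP-style tokens clamp allowances to uint96, so never exceed it.
        IERC20(tokens[i]).safeIncreaseAllowance(ownerIn, type(uint96).max - current);
    }
}
```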
Attacker can freeze deposits and withdrawals indefinitely by submitting a bad withdrawal
Severity: high

Users request to queue a withdrawal using the function below in Vault.
```
function addWithdrawRequest(uint256 _amountMLP, address _token) external {
    require(isAcceptingToken(_token), "ERROR: Invalid token");
    require(_amountMLP != 0, "ERROR: Invalid amount");

    address _withdrawer = msg.sender;
    // Get the pending buffer and staged buffer.
    RequestBuffer storage _pendingBuffer = _requests(false);
    RequestBuffer storage _stagedBuffer = _requests(true);
    // Check if the withdrawer have enough balance to withdraw.
    uint256 _bookedAmountMLP = _stagedBuffer.withdrawAmountPerUser[_withdrawer] +
        _pendingBuffer.withdrawAmountPerUser[_withdrawer];
    require(_bookedAmountMLP + _amountMLP <=
        MozaicLP(mozLP).balanceOf(_withdrawer), "Withdraw amount > amount MLP");
    …
    emit WithdrawRequestAdded(_withdrawer, _token, chainId, _amountMLP);
}
```

Notice that the function only validates that the user holds a sufficient LP token balance at the moment of execution. After the request is queued, the user can move their tokens to another wallet. Later, in `_settleRequests()`, the Vault will attempt to burn the user's tokens:
```
// Burn mozaic LP token.
MozaicLP(mozLP).burn(request.user, _mlpToBurn);
```

This would revert and block any other settlements from occurring. Therefore, users can block the entire settlement process by requesting a tiny withdrawal amount in every epoch and moving their funds to another wallet.

Recommendation: The Vault should take custody of the user's LP tokens when they request a withdrawal. If the entire withdrawal cannot be satisfied, it can refund some tokens back to the user.
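A minimal sketch of escrowing at request time, assuming a SafeERC20-style transfer is available for the LP token:
```
function addWithdrawRequest(uint256 _amountMLP, address _token) external {
    require(isAcceptingToken(_token), "ERROR: Invalid token");
    require(_amountMLP != 0, "ERROR: Invalid amount");
    // Pull the LP tokens into the vault now, so they cannot be moved away
    // between the request and settlement; settlement then burns from the vault.
    IERC20(mozLP).safeTransferFrom(msg.sender, address(this), _amountMLP);
    // ... record the request as before ...
}
```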
Removal of Multisig members will corrupt data structures
Severity: medium

The Mozaic Multisig (the Senate) can remove council members using the TYPE_DEL_OWNER operation:
```
if (proposals[_proposalId].actionType == TYPE_DEL_OWNER) {
    (address _owner) = abi.decode(proposals[_proposalId].payload, (address));
    require(contains(_owner) != 0, "Invalid owner address");
    uint index = contains(_owner);
    for (uint256 i = index; i < councilMembers.length - 1; i++) {
        councilMembers[i] = councilMembers[i + 1];
    }
    councilMembers.pop();
    proposals[_proposalId].executed = true;
    isCouncil[_owner] = false;
}
```

The code finds the owner's index in the councilMembers array, copies all subsequent members downwards, and deletes the last element; finally, it clears the isCouncil[_owner] entry. The issue is actually in the `contains()` function.
```
function contains(address _owner) public view returns (uint) {
    for (uint i = 1; i <= councilMembers.length; i++) {
        if (councilMembers[i - 1] == _owner) {
            return i;
        }
    }
    return 0;
}
```

The function returns the index following the owner's index. Therefore, the intended owner is not deleted from councilMembers; the member after it is. The `submitProposal()` and `confirmTransaction()` privileged functions are not affected by the bug, as they filter by isCouncil. However, the corruption of councilMembers will make deleting the member that followed the removed owner fail, as deletion relies on finding the member in councilMembers.

Recommendation: Fix the `contains()` function to return the correct index of _owner.
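One way to do this, sketched with a found flag alongside a zero-based index so that index 0 remains usable:
```
function indexOf(address _owner) public view returns (bool found, uint index) {
    for (uint i = 0; i < councilMembers.length; i++) {
        if (councilMembers[i] == _owner) {
            return (true, i);
        }
    }
    return (false, 0);
}

// In the TYPE_DEL_OWNER branch:
// (bool found, uint index) = indexOf(_owner);
// require(found, "Invalid owner address");
// for (uint256 i = index; i < councilMembers.length - 1; i++) { ... }
```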
Attacker could abuse victim's vote to pass their own proposal
Severity: medium

Proposals are created using `submitProposal()`:
```
function submitProposal(uint8 _actionType, bytes memory _payload) public onlyCouncil {
    uint256 proposalId = proposalCount;
    proposals[proposalId] = Proposal(msg.sender, _actionType, _payload, 0, false);
    proposalCount += 1;
    emit ProposalSubmitted(proposalId, msg.sender);
}
```

After submission, council members approve them by calling `confirmTransaction()`:
```
function confirmTransaction(uint256 _proposalId) public onlyCouncil
    notConfirmed(_proposalId) {
    confirmations[_proposalId][msg.sender] = true;
    proposals[_proposalId].confirmation += 1;
    emit Confirmation(_proposalId, msg.sender);
}
```

Notably, the _proposalId passed to `confirmTransaction()` is simply the proposalCount at time of submission. This design allows the following scenario:
User A submits proposal P1.
User B is interested in the proposal and confirms it.
An attacker submits proposal P2.
A blockchain re-org occurs; the submission of P1 is dropped in place of P2.
User B's confirmation is applied on top of the re-orged blockchain, and the attacker gets their vote. We have seen very large re-orgs in top blockchains such as Polygon, so this threat remains a possibility to be aware of.

Recommendation: Calculate proposalId as a hash of the proposal properties. This way, votes cannot be misdirected.
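A minimal sketch, assuming the first field of the Proposal struct is the proposer (a salt or nonce could be added if identical resubmissions must be possible):
```
function submitProposal(uint8 _actionType, bytes memory _payload) public onlyCouncil {
    // The id now commits to the proposal's contents, so a re-org cannot
    // redirect an already-broadcast confirmation to a different proposal.
    uint256 proposalId = uint256(keccak256(abi.encode(msg.sender, _actionType, _payload)));
    require(proposals[proposalId].proposer == address(0), "proposal exists");
    proposals[proposalId] = Proposal(msg.sender, _actionType, _payload, 0, false);
    emit ProposalSubmitted(proposalId, msg.sender);
}
```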
MozToken will have a much larger fixed supply than intended.
Severity: medium

MozToken is planned to be deployed on all supported chains, with a total supply of 1B. However, its constructor will mint 1B tokens on each deployment.
```
constructor( address _layerZeroEndpoint, uint8 _sharedDecimals
) OFTV2("Mozaic Token", "MOZ", _sharedDecimals, _layerZeroEndpoint) {
    _mint(msg.sender, 1000000000 * 10 ** _sharedDecimals);
    isAdmin[msg.sender] = true;
}
```

Recommendation: Pass the minted supply as a parameter. Only on the main chain, mint 1B tokens.
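A minimal sketch; satellite-chain deployments would pass zero:
```
constructor(address _layerZeroEndpoint, uint8 _sharedDecimals, uint256 _initialSupply)
    OFTV2("Mozaic Token", "MOZ", _sharedDecimals, _layerZeroEndpoint)
{
    if (_initialSupply > 0) {
        _mint(msg.sender, _initialSupply); // only the main-chain deployment mints 1B
    }
    isAdmin[msg.sender] = true;
}
```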
Theoretical reentrancy attack when TYPE_MINT_BURN proposals are executed
Severity: low

The Senate can pass a proposal to mint or burn tokens.
```
if (proposals[_proposalId].actionType == TYPE_MINT_BURN) {
    (address _token, address _to, uint256 _amount, bool _flag) =
        abi.decode(proposals[_proposalId].payload, (address, address, uint256, bool));
    if (_flag) {
        IXMozToken(_token).mint(_amount, _to);
    } else {
        IXMozToken(_token).burn(_amount, _to);
    }
    proposals[_proposalId].executed = true;
}
```

Note that the proposal is only marked as executed at the end of execution, while the executed flag is checked at the start of the function.
```
function execute(uint256 _proposalId) public onlyCouncil {
    require(proposals[_proposalId].executed == false, "Error: Proposal already executed.");
    require(proposals[_proposalId].confirmation >= threshold, "Error: Not enough confirmations.");
```

Interaction with tokens should generally be assumed to grant arbitrary call execution to users. If the `mint()` or `burn()` call reenters `execute()`, the proposal will be executed twice, resulting in double the amount minted or burned. Specifically for XMoz, execution is not anticipated to be yielded to the _to address, so the threat remains theoretical.

Recommendation: Follow the Checks-Effects-Interactions design pattern: mark the proposal as executed at the start of the function.
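A minimal sketch of the reordering:
```
function execute(uint256 _proposalId) public onlyCouncil {
    require(proposals[_proposalId].executed == false, "Error: Proposal already executed.");
    require(proposals[_proposalId].confirmation >= threshold, "Error: Not enough confirmations.");
    // Effects before interactions: a reentrant call now fails the executed check.
    proposals[_proposalId].executed = true;
    // ... dispatch on actionType and perform the external calls ...
}
```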
XMozToken permits transfers from non-whitelisted addresses
Severity: low

The XMozToken is documented to forbid transfers except from whitelisted addresses or mints.
```
/**
 * @dev Hook override to forbid transfers except from whitelisted
 * addresses and minting
 */
function _beforeTokenTransfer(address from, address to, uint256 /*amount*/) internal view override {
    require(from == address(0) || _transferWhitelist.contains(from)
        || _transferWhitelist.contains(to), "transfer: not allowed");
}
```

However, as can be seen, non-whitelisted users can still transfer tokens, as long as the destination is whitelisted.

Recommendation: Remove the additional check in `_beforeTokenTransfer()`, or update the documentation accordingly.
XMozToken cannot be added to its own whitelist
Severity: low

By design, XMozToken should always be in the whitelist. However, the `updateTransferWhitelist()` implementation forbids both removal and insertion of XMozToken to the whitelist.
```
function updateTransferWhitelist(address account, bool add) external onlyMultiSigAdmin {
    require(account != address(this), "updateTransferWhitelist: Cannot remove xMoz from whitelist");
    if (add) _transferWhitelist.add(account);
    else _transferWhitelist.remove(account);
    emit SetTransferWhitelist(account, add);
}
```

Recommendation: Move the require statement into the else clause.
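A minimal sketch of the fix:
```
function updateTransferWhitelist(address account, bool add) external onlyMultiSigAdmin {
    if (add) {
        _transferWhitelist.add(account); // adding xMoz itself is now allowed
    } else {
        require(account != address(this), "updateTransferWhitelist: Cannot remove xMoz from whitelist");
        _transferWhitelist.remove(account);
    }
    emit SetTransferWhitelist(account, add);
}
```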
User fee token balance can be drained in a single operation by a malicious bot
Severity: high

In `_buildFeeExecutable()`, BrahRouter calculates the total fee charged to the wallet. It uses tx.gasprice, the gas price specified by the bot.
```
if (feeToken == ETH) {
    uint256 totalFee = (gasUsed + GAS_OVERHEAD_NATIVE) * tx.gasprice;
    totalFee = _applyMultiplier(totalFee);
    return (totalFee, recipient, TokenTransfer._nativeTransferExec(recipient, totalFee));
} else {
    uint256 totalFee = (gasUsed + GAS_OVERHEAD_ERC20) * tx.gasprice;
    // Convert fee amount value in fee token
    uint256 feeToCollect = PriceFeedManager(_addressProvider.priceFeedManager())
        .getTokenXPriceInY(totalFee, ETH, feeToken);
    feeToCollect = _applyMultiplier(feeToCollect);
    return (feeToCollect, recipient, TokenTransfer._erc20TransferExec(feeToken, recipient, feeToCollect));
}
```

Since a malicious bot fully controls tx.gasprice, it can submit the transaction with an absurdly high gas price and drain the user's fee token balance in a single operation.

Recommendation: Use a gas oracle or a capped priority fee to ensure an inflated gas price does not harm the user.
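A minimal sketch of the cap; maxGasPrice would be maintained from a gas oracle or set by governance:
```
uint256 gasPrice = tx.gasprice > maxGasPrice ? maxGasPrice : tx.gasprice;
// Reimburse at most the capped price, regardless of what the bot paid.
uint256 totalFee = (gasUsed + GAS_OVERHEAD_NATIVE) * gasPrice;
```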
Users can drain Gelato deposit at little cost
Severity: high

In Console automation, fees are collected via the `claimExecutionFees()` modifier:
```
modifier claimExecutionFees(address _wallet) {
    uint256 startGas = gasleft();
    _;
    if (feeMultiplier > 0) {
        address feeToken = FeePayer._feeToken(_wallet);
        uint256 gasUsed = startGas - gasleft();
        (uint256 feeAmount, address recipient, Types.Executable memory feeTransferTxn) =
            FeePayer._buildFeeExecutable(gasUsed, feeToken);
        emit FeeClaimed(_wallet, feeToken, feeAmount);
        if (feeToken != ETH) {
            uint256 initialBalance = IERC20(feeToken).balanceOf(recipient);
            _executeSafeERC20Transfer(_wallet, feeTransferTxn);
            if (IERC20(feeToken).balanceOf(recipient) - initialBalance < feeAmount) {
                revert UnsuccessfulFeeTransfer(_wallet, feeToken);
            }
        } else {
            uint256 initialBalance = recipient.balance;
            Executor._executeOnWallet(_wallet, feeTransferTxn);
            if (recipient.balance - initialBalance < feeAmount) {
                revert UnsuccessfulFeeTransfer(_wallet, feeToken);
            }
        }
    }
}
```

If the actual delivery costs exceed what these fixed assumptions cover, the difference is paid from the protocol's Gelato deposit rather than by the user, so repeated executions drain the deposit at little cost to the attacker.

Recommendation: When calculating fees in `_buildFeeExecutable()`, there are assumptions about the gas cost of an ERC20 transfer and a native transfer:
```
// Keeper network overhead - 150k
uint256 internal constant GAS_OVERHEAD_NATIVE = 150_000 + 40_000;
uint256 internal constant GAS_OVERHEAD_ERC20 = 150_000 + 90_000;
```
A good fix would be to check the actual gas usage and require it to be under the hard cap.
Team response: Added a gas check for this attack.
Mitigation review: The fix has been applied.
Attackers can drain users over time by donating negligible ERC20 amount
Severity: high

In the Console automation model, a strategy keeps executing until its trigger check fails. For DCA strategies, the swapping trigger is defined as:
```
function canInitSwap(address subAccount, address inputToken, uint256 interval, uint256 lastSwap)
    external view returns (bool)
{
    if (hasZeroBalance(subAccount, inputToken)) {
        return false;
    }
    return ((lastSwap + interval) < block.timestamp);
}
```

Because any non-zero balance keeps the trigger alive, an attacker can repeatedly donate a negligible amount of the input token to the subaccount. Each donation keeps the strategy executing, and the user keeps paying execution fees for swaps of dust.

Recommendation: Define a DUST_AMOUNT: below that amount exit is allowed, while above it swap execution is allowed. A user should only stand to gain from another party donating ERC20 tokens to their account.
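A minimal sketch; the DUST_AMOUNT constant is illustrative and would likely be configured per token:
```
function canInitSwap(address subAccount, address inputToken, uint256 interval, uint256 lastSwap)
    external view returns (bool)
{
    // Donation-sized balances no longer keep the strategy alive.
    if (IERC20(inputToken).balanceOf(subAccount) <= DUST_AMOUNT) {
        return false;
    }
    return ((lastSwap + interval) < block.timestamp);
}
```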
When FeePayer is subsidizing, users can steal gas
Severity: medium

The feeMultiplier enables the admin to subsidize or upcharge for the automation service.
```
/**
 * @notice feeMultiplier represents the total fee to be charged on the transaction
 *         Is set to 100% by default
 * @dev In case feeMultiplier is less than BASE_BPS, fees charged will be less than 100%,
 *      subsidizing the transaction
 *      In case feeMultiplier is greater than BASE_BPS, fees charged will be greater than 100%,
 *      charging the user for the transaction
 */
uint16 public feeMultiplier = 10_000;
```

The normal fee is calculated and then processed by the multiplier:
```
if (feeToken == ETH) {
    uint256 totalFee = (gasUsed + GAS_OVERHEAD_NATIVE) * tx.gasprice;
    totalFee = _applyMultiplier(totalFee);
    return (totalFee, recipient, TokenTransfer._nativeTransferExec(recipient, totalFee));
} else {
```

When the multiplier is below 100%, the wallet pays less than the gas actually spent on its behalf, so a user can run arbitrarily gas-heavy operations and pocket the subsidized difference at the protocol's expense.

Recommendation: The root cause is that the gasUsed portion is subsidized along with GAS_OVERHEAD_NATIVE, the gas reserved for delivery by the Gelato executors. By subsidizing only the Gelato gas portion, users cannot gain from gas-minting attacks, while the intended user-experience improvement is maintained.
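A minimal sketch of splitting the subsidy:
```
// Charge the user-controlled portion at full price; subsidize only the
// fixed keeper overhead.
uint256 totalFee = gasUsed * tx.gasprice
    + _applyMultiplier(GAS_OVERHEAD_NATIVE * tx.gasprice);
```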
Strategy actions could be executed out of order due to lack of reentrancy guard
Severity: medium

The Execute module performs automation of the fetched Executable array on wallet subaccounts.
```
function _executeAutomation( address _wallet, address _subAccount, address _strategy,
    Types.Executable[] memory _actionExecs ) internal {
    uint256 actionLen = _actionExecs.length;
    if (actionLen == 0) {
        revert InvalidActions();
    } else {
        uint256 idx = 0;
        do {
            _executeOnSubAccount(_wallet, _subAccount, _strategy, _actionExecs[idx]);
            unchecked {
                ++idx;
            }
        } while (idx < actionLen);
    }
}
```

Because `_executeOnSubAccount()` hands control to external targets and the automation entry points are not reentrancy-protected, a call made during one action could reenter and start another automation run, interleaving strategy actions out of their intended order.

Recommendation: Add a reentrancy guard to `executeAutomationViaBot()` and `executeTrustedAutomation()`.
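A minimal sketch using OpenZeppelin's ReentrancyGuard; the surrounding contract and signature are illustrative:
```
import {ReentrancyGuard} from "@openzeppelin/contracts/security/ReentrancyGuard.sol";

contract Executor is ReentrancyGuard {
    function executeAutomationViaBot(address _wallet, address _subAccount, address _strategy)
        external nonReentrant
    {
        // ... fetch the strategy's Executable array and run _executeAutomation ...
    }
}
```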
Anyone can make creating strategies extremely expensive for the user
Severity: medium

In the Console architecture, users can deploy spare subaccounts (Gnosis Safes) so that by the time they subscribe to a strategy, most of the gas has already been spent during a low-gas period.
```
function deploySpareSubAccount(address _wallet) external {
    address subAccount = SafeDeployer(addressProvider.safeDeployer()).deploySubAccount(_wallet);
    subAccountToWalletMap[subAccount] = _wallet;
    walletToSubAccountMap[_wallet].push(subAccount);
    // No need to update subAccountStatus as it is already set to false
    emit SubAccountAllocated(_wallet, subAccount);
}
```

Impact: Anyone can call the deploy function and specify another user's wallet. While on the surface that sounds like donating gas costs, in practice this functionality can make operating with strategies prohibitively expensive. When users subscribe to strategies, the StrategyRegistry requests a subaccount using this function:
```
function requestSubAccount(address _wallet) external returns (address) {
    if (msg.sender != subscriptionRegistry)
        revert OnlySubscriptionRegistryCallable();
    // Try to find a subAccount which already exists
    address[] memory subAccountList = walletToSubAccountMap[_wallet];
```

At this point, the entire subaccount array is copied from storage to memory. An attacker can therefore fill the array with hundreds of elements at a low-gas time and make creating strategies very expensive.

Recommendation: Limit the number of spare subaccounts to something reasonable, like 10.
Team response: Removing the spare subaccount deployment.
Mitigation review: The attack surface has been removed.
DCA Strategies build orders that may not be executable, wasting fees
Severity: medium

In `_buildInitiateSwapExecutable()`, DCA strategies determine the swap parameters for the CoW Swap. The code has recently been refactored so that more than one order may be active simultaneously. The issue is that the function assumes the user's entire ERC20 balance is available for the order being built.
```
// Check if enough balance present to swap, else swap entire balance
uint256 amountIn = (inputTokenBalance < params.amountToSwap) ?
    inputTokenBalance : params.amountToSwap;
```

Impact: If a previous order executes before the current one, there may not be enough funds to pull from the user to execute the swap. As a result, transaction execution fees are wasted.

Recommendation: Ensure only one swap can be in flight at a time, or deduct the in-flight swap amounts from the current balance.
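A minimal sketch of the second option; pendingSwapAmounts is an assumed per-token accounting of in-flight orders:
```
// Only the balance not already committed to in-flight orders is available.
uint256 available = inputTokenBalance - pendingSwapAmounts[params.tokenIn];
uint256 amountIn = (available < params.amountToSwap) ? available : params.amountToSwap;
```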
User will lose all Console functionality when upgrading their wallet and an upgrade target has not been set up
Severity: medium

Console supports upgrading the manager wallet using the `upgradeWalletType()` function.
```
function upgradeWalletType() external {
    if (!isWallet(msg.sender))
        revert WalletDoesntExist(msg.sender);
    uint8 fromWalletType = _walletDataMap[msg.sender].walletType;
    _setWalletType(msg.sender, _upgradablePaths[fromWalletType]);
    emit WalletUpgraded(msg.sender, fromWalletType, _upgradablePaths[fromWalletType]);
}
```

Note that _upgradablePaths are set by governance. There is no check that the upgradable path is defined before performing the upgrade.
```
function _setWalletType(address _wallet, uint8 _walletType) private {
    _walletDataMap[_wallet].walletType = _walletType;
}
```

If _upgradablePaths[fromWalletType] is zero (uninitialized), the user's wallet type becomes zero too. However, zero is an invalid value, as defined by the `isWallet()` view function:
```
function isWallet(address _wallet) public view returns (bool) {
    WalletData memory walletData = _walletDataMap[_wallet];
    if (walletData.walletType == 0 || walletData.feeToken == address(0)) {
        return false;
    }
    return true;
}
```

Impact: Most of Console's functionality is permanently broken for users who upgrade their wallet while an upgrade path is not set. They can still salvage their funds if the wallet is a Safe account, as they can execute on it directly.

Recommendation: When setting a new wallet type, make sure the new type is not zero.
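A minimal sketch; the custom error is illustrative:
```
error UpgradePathNotSet(uint8 fromWalletType);

function upgradeWalletType() external {
    if (!isWallet(msg.sender)) revert WalletDoesntExist(msg.sender);
    uint8 fromWalletType = _walletDataMap[msg.sender].walletType;
    uint8 toWalletType = _upgradablePaths[fromWalletType];
    // Never downgrade a wallet into the invalid zero type.
    if (toWalletType == 0) revert UpgradePathNotSet(fromWalletType);
    _setWalletType(msg.sender, toWalletType);
    emit WalletUpgraded(msg.sender, fromWalletType, toWalletType);
}
```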
Rounding error causes an additional iteration of DCA strategies
Severity: low

Both CoW strategies receive an interval and a total amountIn of tokens to swap. They calculate the amount per iteration as below:
```
Types.TokenRequest[] memory tokens = new Types.TokenRequest[](1);
tokens[0] = Types.TokenRequest({token: inputToken, amount: amountIn});
amountIn = amountIn / iterations;
StrategyParams memory params = StrategyParams({ tokenIn: inputToken,
    tokenOut: outputToken, amountToSwap: amountIn, interval: interval, remitToOwner: remitToOwner
});
```

Since amountIn / iterations rounds down, the full amountIn requested from the wallet exceeds iterations * (amountIn / iterations); the leftover remainder keeps the strategy's balance non-zero and triggers an additional iteration.

Recommendation: Change the amount requested from the management wallet to amountIn / iterations * iterations.
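A minimal sketch:
```
// Request exactly what the iterations will consume; drop the remainder.
amountIn = (amountIn / iterations) * iterations;
tokens[0] = Types.TokenRequest({token: inputToken, amount: amountIn});
uint256 amountPerIteration = amountIn / iterations;
```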
Fee mismatch between contracts can make strategies unusable
low
In CoW Swap strategies, the fee is set in the strategy contracts and then passed to `initiateSwap()`. The call is built in `_buildInitiateSwapExecutable()`:\\n```\\n // Generate executable to initiate swap on DCACoWAutomation\\n return Types.Executable({\\n callType: Types.CallType.DELEGATECALL,\\n target: dcaCoWAutomation,\\n value: 0,\\n data: abi.encodeCall(\\n DCACoWAutomation.initiateSwap,\\n (params.tokenIn, params.tokenOut, swapRecipient, amountIn, minAmountOut, swapFee)\\n )\\n });\\n```\\n\\nThere is a mismatch between the fee constraints in the strategy contracts and in the `initiateSwap()` function: `setSwapFee()` accepts any fee up to 10,000 bps, while `initiateSwap()` reverts for fees above 1,000 bps, so a strategy configured with a fee in between can no longer execute swaps.\\n```\\n function setSwapFee(uint256 _swapFee) external {\\n _onlyGov();\\n if (_swapFee > 10_000) {\\n revert InvalidSlippage();\\n }\\n swapFee = _swapFee;\\n }\\n\\n if (feeBps > 0) {\\n if (feeBps > 1_000) revert FeeTooHigh();\\n amountIn = amountToSwap * (MAX_BPS - feeBps) / MAX_BPS;\\n```\\n
Enforce the same constraints on the fee percentage in both contracts, or remove the check from one of them as part of a simplified security model.
null
```\\n // Generate executable to initiate swap on DCACoWAutomation\\n return Types.Executable({\\n callType: Types.CallType.DELEGATECALL,\\n target: dcaCoWAutomation,\\n value: 0,\\n data: abi.encodeCall(\\n DCACoWAutomation.initiateSwap,\\n (params.tokenIn, params.tokenOut, swapRecipient, amountIn, minAmountOut, swapFee)\\n )\\n });\\n```\\n
Reentrancy protection can likely be bypassed
high
The KeyManager offers reentrancy protection for interactions with the associated account. Through the LSP20 callbacks or through the `execute()` calls, it will call `_nonReentrantBefore()` before execution, and `_nonReentrantAfter()` post-execution. The latter will always reset the flag signaling entry.\\n```\\n function _nonReentrantAfter() internal virtual {\\n // By storing the original value once again, a refund is triggered\\n // (see https://eips.ethereum.org/EIPS/eip-2200)\\n _reentrancyStatus = false;\\n }\\n```\\n\\nAn attacker can abuse it to reenter provided that there exists some third-party contract with REENTRANCY_PERMISSION that performs some interaction with the contract. The attacker would trigger the third-party code path, which will clear the reentrancy status and enable the attacker to reenter. This could potentially be chained several times. Breaking the reentrancy assumption makes any code that relies on such flows being impossible vulnerable.
In `_nonReentrantAfter()`, the flag should be restored to the value it held before entry, rather than always being set to false.
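One possible shape of the fix, assuming the before/after hooks can be changed to pass the prior status (a sketch, not the reference implementation):\\n```\\n function _nonReentrantBefore() internal virtual returns (bool previousStatus) {\\n previousStatus = _reentrancyStatus;\\n _reentrancyStatus = true;\\n }\\n\\n function _nonReentrantAfter(bool previousStatus) internal virtual {\\n // restore the pre-entry value; a nested call no longer clears the flag for its callers\\n _reentrancyStatus = previousStatus;\\n }\\n```\\n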
null
```\\n function _nonReentrantAfter() internal virtual {\\n // By storing the original value once again, a refund is triggered\\n // (see https://eips.ethereum.org/EIPS/eip-2200)\\n _reentrancyStatus = false;\\n }\\n```\\n
LSP20 verification library deviates from spec and will accept fail values
medium
The functions `lsp20VerifyCall()` and `lsp20VerifyCallResult()` are called to validate that the owner accepts some account interaction. The specification states they must return a specific 4-byte magic value. However, the implementation will accept any byte array that starts with the required magic value.\\n```\\n function _verifyCall(address logicVerifier) internal virtual returns (bool verifyAfter) {\\n (bool success, bytes memory returnedData) = logicVerifier.call(\\n abi.encodeWithSelector(ILSP20.lsp20VerifyCall.selector, msg.sender, msg.value, msg.data)\\n );\\n if (!success) _revert(false, returnedData);\\n if (returnedData.length < 32) revert LSP20InvalidMagicValue(false, returnedData);\\n bytes32 magicValue = abi.decode(returnedData, (bytes32));\\n if (bytes3(magicValue) != bytes3(ILSP20.lsp20VerifyCall.selector))\\n revert LSP20InvalidMagicValue(false, returnedData);\\n return bytes1(magicValue[3]) == 0x01 ? true : false;\\n }\\n```\\n\\nTherefore, implementations of the above functions which intend to signal failure status may be accepted by the verification wrapper above.
Verify that the return data length is 32 bytes (the 4 bytes are extended by the compiler), and that all other bytes are zero.
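A sketch of the stricter check (the error handling is kept from the original):\\n```\\n if (returnedData.length != 32) revert LSP20InvalidMagicValue(false, returnedData);\\n bytes32 magicValue = abi.decode(returnedData, (bytes32));\\n // first 3 bytes must match the selector, byte 3 must be 0x00 or 0x01, all remaining bytes must be zero\\n if (\\n bytes3(magicValue) != bytes3(ILSP20.lsp20VerifyCall.selector) ||\\n uint8(magicValue[3]) > 1 ||\\n (magicValue << 32) != bytes32(0)\\n ) revert LSP20InvalidMagicValue(false, returnedData);\\n return magicValue[3] == 0x01;\\n```\\n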
null
```\\n function _verifyCall(address logicVerifier) internal virtual returns (bool verifyAfter) {\\n (bool success, bytes memory returnedData) = logicVerifier.call(\\n abi.encodeWithSelector(ILSP20.lsp20VerifyCall.selector, msg.sender, msg.value, msg.data)\\n );\\n if (!success) _revert(false, returnedData);\\n if (returnedData.length < 32) revert LSP20InvalidMagicValue(false, returnedData);\\n bytes32 magicValue = abi.decode(returnedData, (bytes32));\\n if (bytes3(magicValue) != bytes3(ILSP20.lsp20VerifyCall.selector))\\n revert LSP20InvalidMagicValue(false, returnedData);\\n return bytes1(magicValue[3]) == 0x01 ? true : false;\\n }\\n```\\n
Deviation from spec will result in dislocation of receiver delegate
medium
The LSP0 `universalReceiver()` function looks up the receiver delegate by crafting a mapping key type.\\n```\\n bytes32 lsp1typeIdDelegateKey = LSP2Utils.generateMappingKey(\\n _LSP1_UNIVERSAL_RECEIVER_DELEGATE_PREFIX, bytes20(typeId));\\n```\\n\\nMapping keys are constructed of a 10-byte prefix, 2 zero bytes and a 20-byte suffix. However, followers of the specification will use an incorrect suffix, because the docs do not discuss the trimming of the bytes32 typeId into a bytes20 type. The mismatch may cause various harmful scenarios when interacting with delegates that do not use the reference implementation.
Document the trimming action in the LSP0 specification.
null
```\\n bytes32 lsp1typeIdDelegateKey = LSP2Utils.generateMappingKey(\\n _LSP1_UNIVERSAL_RECEIVER_DELEGATE_PREFIX, bytes20(typeId));\\n```\\n
KeyManager ERC165 does not support LSP20
medium
LSP6KeyManager supports LSP20 call verification. However, its `supportsInterface()` does not report the LSP20 interfaceId.\\n```\\n function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {\\n return\\n interfaceId == _INTERFACEID_LSP6 || interfaceId == _INTERFACEID_ERC1271 ||\\n super.supportsInterface(interfaceId);\\n }\\n```\\n\\nAs a result, clients which correctly check for support of LSP20 methods will not operate with the KeyManager implementation.
Insert another supported interfaceId under `supportsInterface()`.
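A sketch of the addition; the constant name for the LSP20 interfaceId is assumed here:\\n```\\n function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {\\n return\\n interfaceId == _INTERFACEID_LSP6 ||\\n interfaceId == _INTERFACEID_ERC1271 ||\\n interfaceId == _INTERFACEID_LSP20_CALL_VERIFIER || // constant name assumed\\n super.supportsInterface(interfaceId);\\n }\\n```\\n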
null
```\\n function supportsInterface(bytes4 interfaceId) public view virtual override returns (bool) {\\n return\\n interfaceId == _INTERFACEID_LSP6 || interfaceId == _INTERFACEID_ERC1271 ||\\n super.supportsInterface(interfaceId);\\n }\\n```\\n
LSP0 ownership functions deviate from specification and reject native tokens
low
The LSP specifications define the following functions for LSP0:\\n```\\n function transferOwnership(address newPendingOwner) external payable;\\n function renounceOwnership() external payable;\\n```\\n\\nHowever, their implementations are not payable.\\n```\\n function transferOwnership(address newOwner) public virtual\\n override(LSP14Ownable2Step, OwnableUnset)\\n {\\n```\\n\\n```\\n function renounceOwnership() public virtual override(LSP14Ownable2Step, OwnableUnset) {\\n address _owner = owner();\\n```\\n\\nThis may break interoperation between conforming and non-conforming contracts.
Remove the payable keyword in the specification for the above functions, or make the implementations payable.
null
```\\n function transferOwnership(address newPendingOwner) external payable;\\n function renounceOwnership() external payable;\\n```\\n
Transfers of vaults from an invalid source are not treated correctly by receiver delegate
low
In the universalReceiver() function, if the notifying contract does not support LSP9, yet the typeID corresponds to an LSP9 transfer, the function will return instead of reverting.\\n```\\n if (\\n mapPrefix == _LSP10_VAULTS_MAP_KEY_PREFIX && notifier.code.length > 0 &&\\n !notifier.supportsERC165InterfaceUnchecked(_INTERFACEID_LSP9)\\n ) {\\n return "LSP1: not an LSP9Vault ownership transfer";\\n }\\n```\\n
Revert when dealing with transfers that cannot be valid.
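The minimal change is to turn the early return into a revert:\\n```\\n if (\\n mapPrefix == _LSP10_VAULTS_MAP_KEY_PREFIX && notifier.code.length > 0 &&\\n !notifier.supportsERC165InterfaceUnchecked(_INTERFACEID_LSP9)\\n ) {\\n revert("LSP1: not an LSP9Vault ownership transfer");\\n }\\n```\\n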
null
```\\n if (\\n mapPrefix == _LSP10_VAULTS_MAP_KEY_PREFIX && notifier.code.length > 0 &&\\n !notifier.supportsERC165InterfaceUnchecked(_INTERFACEID_LSP9)\\n ) {\\n return "LSP1: not an LSP9Vault ownership transfer";\\n }\\n```\\n
Relayer can choose amount of gas for delivery of message
low
LSP6 supports relaying of calls using a supplied signature. The encoded message is defined as:\\n```\\n bytes memory encodedMessage = abi.encodePacked( LSP6_VERSION,\\n block.chainid,\\n nonce,\\n msgValue,\\n payload\\n );\\n```\\n\\nThe message doesn't include a gas parameter, which means the relayer can specify any gas amount. If the provided gas is insufficient, the entire transaction will revert. However, if the called contract exhibits different behavior depending on the supplied gas, a relayer (attacker) has control over that behavior.
Signed message should include the gas amount passed. Care should be taken to verify there is enough gas in the current state for the gas amount not to be truncated due to the 63/64 rule.
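A sketch of the idea; the `gasLimit` field and its handling are hypothetical additions:\\n```\\n bytes memory encodedMessage = abi.encodePacked(\\n LSP6_VERSION,\\n block.chainid,\\n nonce,\\n msgValue,\\n gasLimit, // hypothetical: gas amount committed to by the signer\\n payload\\n );\\n // account for the 63/64 rule so the forwarded call actually receives gasLimit\\n require(gasleft() >= gasLimit + (gasLimit / 63), "insufficient gas supplied");\\n```\\n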
null
```\\n bytes memory encodedMessage = abi.encodePacked( LSP6_VERSION,\\n block.chainid,\\n nonce,\\n msgValue,\\n payload\\n );\\n```\\n
_calculateClaim() does not distribute boost emissions correctly
high
The function `_calculateClaim()` is responsible for calculating the amount of emissions a specific veSatin is entitled to claim. The idea is to distribute emissions only to veSatin tokens locked for more than minLockDurationForReward, and only for the extra time the veSatin is locked for on top of minLockDurationForReward. As an example, if minLockDurationForReward is set to 6 months, a veSatin locked for 7 months would receive emissions for 1 month and a veSatin locked for 5 months would receive no emissions at all. To do so, the following code is executed in a loop, where every iteration calculates the amount of emissions the veSatin accumulated during a specific week, in chronological order:\\n```\\n if ((lockEndTime - oldUserPoint.ts) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n\\nThe code distributes the rewards if the elapsed time between lockEndTime (the locking end timestamp) and oldUserPoint.ts is bigger than minLockDurationForReward. However, oldUserPoint.ts is the timestamp of the last user action on a veSatin, for example depositing LP by calling `increaseAmount()`. As an example, a user that locks their veSatin and does nothing else will receive rewards for the whole locking duration. In contrast, a user that performs one action a week would only receive rewards for the locking duration minus minLockDurationForReward.
The variable weekCursor should be used instead of oldUserPoint.ts in the if condition:\\n```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n```\\n
null
```\\n if ((lockEndTime - oldUserPoint.ts) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n
Users will be unable to claim emissions from veSatin tokens if they withdraw or merge them
high
The function `_calculateClaim()` uses the variable lockEndTime when checking if a veSatin is entitled to emissions for a particular week (code with mitigation from TRST-H-1):\\n```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n\\nHowever, lockEndTime is set to 0 whenever a user withdraws a veSatin by calling `withdraw()` or merges one by calling `merge()`. When this is the case, the operation lockEndTime - weekCursor underflows, thus reverting. This results in users being unable to claim veSatin emissions if they withdraw or merge the token first.
In the `withdraw()` and `merge()` functions, call `claim()` in VeDist.sol to claim emissions before setting the lock end timestamp to 0. In `merge()` this is only necessary for the veSatin passed as _from.
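A sketch of the ordering in `withdraw()`, assuming a claim entry point on VeDist (the interface name is hypothetical):\\n```\\n function withdraw(uint _tokenId) external {\\n // settle accrued emissions before the lock end timestamp is zeroed\\n IVeDist(veDist).claim(_tokenId);\\n // ... existing withdraw logic follows\\n }\\n```\\n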
null
```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n
It's never possible to vote for new pools until setMaxVotesForPool() is called
high
The function `_vote()` allows voting on a pool only when the current amount of votes plus the new votes is lower or equal to the value returned by _calculateMaxVotePossible():\\n```\\n require(_poolWeights <= _calculateMaxVotePossible(_pool), "Max votes exceeded");\\n```\\n\\nHowever, `_calculateMaxVotePossible()` returns 0 for every pool in which the variable maxVotesForPool has not been initialized, thus making `_vote()` revert:\\n```\\n return ((totalVotingPower * maxVotesForPool[_pool]) / 100);\\n```\\n
In `createGauge()` and `createGauge4Pool()` set maxVotesForPool for the pool the gauge is being created for to 100.
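A sketch of the initialization inside the gauge-creation functions:\\n```\\n // inside createGauge() / createGauge4pool(), after the gauge is created\\n maxVotesForPool[_pool] = 100; // default to allowing the full voting power\\n```\\n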
null
```\\n require(_poolWeights <= _calculateMaxVotePossible(_pool), "Max votes exceeded");\\n```\\n
The protocol might transfer extra SATIN emissions to veSatin holders potentially making SatinVoter.sol insolvent
high
The function `_distribute()` in SatinVoter.sol is generally responsible for distributing weekly emissions to a gauge based on the percentage of total votes the associated pool received. In particular, it's called by `updatePeriod()` (as per fix TRST-H-4) on the gauge associated with the Satin / $CASH pool. The variable veShare is set to be equal to the returned value of `calculateSatinCashLPVeShare()`, which is calculated as the percentage of Satin / $CASH LP times claimable[gauge] and represents the amount of SATIN that will be transferred to VeDist.sol when checkpointing emissions in checkpointEmissions():\\n```\\n uint _claimable = claimable[_gauge];\\n if (SATIN_CASH_LP_GAUGE == _gauge) {\\n veShare = calculateSatinCashLPVeShare(_claimable);\\n _claimable -= veShare;\\n }\\n if (_claimable > IMultiRewardsPool(_gauge).left(token) && _claimable / DURATION > 0) {\\n claimable[_gauge] = 0;\\n if (is4poolGauge[_gauge]) {\\n IGauge(_gauge).notifyRewardAmount(token, _claimable, true);\\n } else {\\n IGauge(_gauge).notifyRewardAmount(token, _claimable, false);\\n }\\n emit DistributeReward(msg.sender, _gauge, _claimable);\\n }\\n```\\n\\nHowever, when the if condition (_claimable > IMultiRewardsPool(_gauge).left(token) && _claimable / DURATION > 0) is false, the variable claimable[_gauge] is not set to 0, meaning that the next time veShare is calculated it will include emissions that have already been distributed, potentially making SatinVoter.sol insolvent.
Adjust claimable[gauge] after calculating veShare and calculate veShare only if the msg.sender is SatinMinter.sol to prevent potential attackers from manipulating the value by repeatedly calling _distribute():\\n```\\n if (SATIN_CASH_LP_GAUGE == _gauge && msg.sender == minter) {\\n veShare = calculateSatinCashLPVeShare(_claimable);\\n claimable[_gauge] -= veShare;\\n _claimable -= veShare;\\n }\\n```\\n
null
```\\n uint _claimable = claimable[_gauge];\\n if (SATIN_CASH_LP_GAUGE == _gauge) {\\n veShare = calculateSatinCashLPVeShare(_claimable);\\n _claimable -= veShare;\\n }\\n if (_claimable > IMultiRewardsPool(_gauge).left(token) && _claimable / DURATION > 0) {\\n claimable[_gauge] = 0;\\n if (is4poolGauge[_gauge]) {\\n IGauge(_gauge).notifyRewardAmount(token, _claimable, true);\\n } else {\\n IGauge(_gauge).notifyRewardAmount(token, _claimable, false);\\n }\\n emit DistributeReward(msg.sender, _gauge, _claimable);\\n }\\n```\\n
It's possible to drain all the funds from ExternalBribe
high
The function `earned()` is used to calculate the amount of rewards owed to a tokenId. To do so, it performs a series of operations in a loop and then always executes:\\n```\\n Checkpoint memory cp = checkpoints[tokenId][_endIndex];\\n uint _lastEpochStart = _bribeStart(cp.timestamp);\\n uint _lastEpochEnd = _lastEpochStart + DURATION;\\n if (block.timestamp > _lastEpochEnd) {\\n reward += (cp.balanceOf * \\n tokenRewardsPerEpoch[token][_lastEpochStart]) /\\n supplyCheckpoints[getPriorSupplyIndex(_lastEpochEnd)].supply;\\n```\\n\\nwhich adds to reward the amount of rewards earned by the tokenId during the last epoch in which it was used to vote, but only if that happened at least a week prior (block.timestamp > _lastEpochEnd). Because of this, it's possible to call `earned()` multiple times in a row with a tokenId that voted more than a week before to drain the contract funds.
The function `earned()` is taken from the Velodrome protocol and is known to have issues. Because it uses the convoluted logic of looping over votes to calculate the rewards per epoch instead of looping over epochs, we recommend using the Velodrome fixed implementation, which we reviewed:\\n```\\n function earned(address token, uint tokenId) public view returns (uint) {\\n if (numCheckpoints[tokenId] == 0) {\\n return 0;\\n }\\n uint reward = 0;\\n uint _ts = 0;\\n uint _bal = 0;\\n uint _supply = 1;\\n uint _index = 0;\\n uint _currTs = _bribeStart(lastEarn[token][tokenId]); // take epoch last claimed in as starting point\\n _index = getPriorBalanceIndex(tokenId, _currTs);\\n _ts = checkpoints[tokenId][_index].timestamp;\\n _bal = checkpoints[tokenId][_index].balanceOf;\\n // accounts for case where lastEarn is before first checkpoint\\n _currTs = Math.max(_currTs, _bribeStart(_ts));\\n // get epochs between current epoch and first checkpoint in same epoch as last claim\\n uint numEpochs = (_bribeStart(block.timestamp) - _currTs) / DURATION;\\n if (numEpochs > 0) {\\n for (uint256 i = 0; i < numEpochs; i++) {\\n // get index of last checkpoint in this epoch\\n _index = getPriorBalanceIndex(tokenId, _currTs + DURATION);\\n // get checkpoint in this epoch\\n _ts = checkpoints[tokenId][_index].timestamp;\\n _bal = checkpoints[tokenId][_index].balanceOf;\\n // get supply of last checkpoint in this epoch\\n _supply = supplyCheckpoints[getPriorSupplyIndex(_currTs + DURATION)].supply;\\n reward += _bal * tokenRewardsPerEpoch[token][_currTs] / _supply;\\n _currTs += DURATION;\\n }\\n }\\n return reward;\\n }\\n```\\n
null
```\\n Checkpoint memory cp = checkpoints[tokenId][_endIndex];\\n uint _lastEpochStart = _bribeStart(cp.timestamp);\\n uint _lastEpochEnd = _lastEpochStart + DURATION;\\n if (block.timestamp > _lastEpochEnd) {\\n reward += (cp.balanceOf * \\n tokenRewardsPerEpoch[token][_lastEpochStart]) /\\n supplyCheckpoints[getPriorSupplyIndex(_lastEpochEnd)].supply;\\n```\\n
Division by 0 can freeze emissions claims for veSatin holders
medium
The function `_calculateClaim()` is responsible for calculating the amount of emissions a specific veSatin is entitled to claim. In doing so, this code is executed (code with mitigation from TRST-H-1):\\n```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n\\nThe variable veSupply[weekCursor] is used as a denominator without checking if it's 0, which could make the function revert. If the protocol ever reaches a state where veSupply[weekCursor] is 0, all the claims for veSatin that were locked during that week would fail for both past and future claims. The same issue is present in the function `_calculateEmissionsClaim()`.
Ensure veSupply[weekCursor] is not 0 when performing the division.
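A sketch of the guard, applied inside the existing condition:\\n```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n uint weekSupply = veSupply[weekCursor];\\n if (weekSupply > 0) {\\n toDistribute += (balanceOf * tokensPerWeek[weekCursor]) / weekSupply;\\n }\\n weekCursor += WEEK;\\n }\\n```\\n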
null
```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n
BaseV1Pair could break because of overflow
medium
In the function _update(), called internally by `mint()`, `burn()` and `swap()`, the following code is executed:\\n```\\n uint256 timeElapsed = blockTimestamp - blockTimestampLast;\\n // overflow is desired\\n if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) {\\n reserve0CumulativeLast += _reserve0 * timeElapsed;\\n reserve1CumulativeLast += _reserve1 * timeElapsed;\\n }\\n```\\n\\nThis is forked from UniswapV2 source code, and it's meant and known to overflow. It works fine if solidity < 0.8.0 is used but reverts when solidity >= 0.8.0 is used. If this happens all the core functionalities of the pool would break, including `mint()`, `burn()`, and `swap()`.
Wrap the operation in an unchecked {} block so that when the variable overflows it wraps around to 0 instead of reverting.
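A sketch of the fix:\\n```\\n uint256 timeElapsed = blockTimestamp - blockTimestampLast;\\n if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) {\\n // overflow is desired: the cumulative counters wrap around instead of reverting\\n unchecked {\\n reserve0CumulativeLast += _reserve0 * timeElapsed;\\n reserve1CumulativeLast += _reserve1 * timeElapsed;\\n }\\n }\\n```\\n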
null
```\\n uint256 timeElapsed = blockTimestamp - blockTimestampLast;\\n // overflow is desired\\n if (timeElapsed > 0 && _reserve0 != 0 && _reserve1 != 0) {\\n reserve0CumulativeLast += _reserve0 * timeElapsed;\\n reserve1CumulativeLast += _reserve1 * timeElapsed;\\n }\\n```\\n
createGauge4Pool() lacks proper checks and/or access control
medium
The function createGauge4pool() can be called by anybody at any time and is used to create a gauge for a special pool, the 4pool. It takes 5 parameters as inputs:\\n```\\n function createGauge4pool(\\n address _4pool,\\n address _dai,\\n address _usdc,\\n address _usdt,\\n address _cash\\n ) external returns (address) {\\n```\\n\\nNone of the parameters are properly sanitized, meaning _dai, _usdc, _usdt, _cash could be any whitelisted token and not necessarily DAI, USDC, USDT, and cash, while _4pool could be any custom contract, including a malicious one. The function also sets the variable FOUR_POOL_GAUGE_ADDRESS to the newly created gauge, overwriting the previous value.
Make the function only callable by an admin, and if it can be called multiple times, turn the variable FOUR_POOL_GAUGE_ADDRESS into a mapping from address to boolean to support multiple 4pools.
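A sketch of the access control; the `governance` variable is an assumption about the surrounding contract:\\n```\\n function createGauge4pool(\\n address _4pool, address _dai, address _usdc, address _usdt, address _cash\\n ) external returns (address) {\\n require(msg.sender == governance, "!governance"); // hypothetical admin check\\n // ... gauge creation as before, recording the gauge in a mapping:\\n // is4poolGauge[newGauge] = true; // instead of overwriting a single address\\n }\\n```\\n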
null
```\\n function createGauge4pool(\\n address _4pool,\\n address _dai,\\n address _usdc,\\n address _usdt,\\n address _cash\\n ) external returns (address) {\\n```\\n
The logic in _calculateClaim() can leave some tokens locked and waste gas
low
The function `_calculateClaim()` is responsible for calculating the amount of emissions a specific veSatin is entitled to claim. To do so, this code is executed in a loop for each week from the current timestamp to the last claim (code with mitigation from TRST-H-1):\\n```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n\\nWhen the if condition is not met two things happen:\\nAn amount of emissions that was supposed to be distributed ((balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor]) is skipped, meaning it will stay locked in the contract.\\nThe function `_calculateClaim()` will loop for the maximum number of times (50), because weekCursor is not increased, wasting users' gas.
When the if condition is not met burn the tokens that were supposed to be distributed and exit the loop. Since the non-distributed tokens would stay locked it's not strictly necessary to burn them.
null
```\\n if ((lockEndTime - weekCursor) > (minLockDurationForReward)) {\\n toDistribute +=\\n (balanceOf * tokensPerWeek[weekCursor]) / veSupply[weekCursor];\\n weekCursor += WEEK;\\n }\\n```\\n
More than one hat of the same hatId can be assigned to a user
high
Hats are minted internally using `_mintHat()`.\\n```\\n /// @notice Internal call to mint a Hat token to a wearer\\n /// @dev Unsafe if called when `_wearer` has a non-zero balance of `_hatId`\\n /// @param _wearer The wearer of the Hat and the recipient of the newly minted token\\n /// @param _hatId The id of the Hat to mint\\n function _mintHat(address _wearer, uint256 _hatId) internal {\\n unchecked {\\n // should not overflow since `mintHat` enforces max balance of 1\\n _balanceOf[_wearer][_hatId] = 1;\\n // increment Hat supply counter\\n // should not overflow given AllHatsWorn check in `mintHat`\\n ++_hats[_hatId].supply;\\n }\\n emit TransferSingle(msg.sender, address(0), _wearer, _hatId, 1);\\n }\\n```\\n\\nAs the documentation states, it is unsafe if _wearer already has a non-zero balance of _hatId. However, this could easily be the case when called from `mintHat()`.\\n```\\n function mintHat(uint256 _hatId, address _wearer) public returns (bool) {\\n Hat memory hat = _hats[_hatId];\\n if (hat.maxSupply == 0) revert HatDoesNotExist(_hatId);\\n // only the wearer of a hat's admin Hat can mint it\\n _checkAdmin(_hatId);\\n if (hat.supply >= hat.maxSupply) {\\n revert AllHatsWorn(_hatId);\\n }\\n if (isWearerOfHat(_wearer, _hatId)) {\\n revert AlreadyWearingHat(_wearer, _hatId);\\n }\\n _mintHat(_wearer, _hatId);\\n return true;\\n }\\n```\\n\\nThe function validates that _wearer doesn't currently wear the hat, but their balance could still be over 0 if the hat is currently toggled off or the wearer is not eligible. The impact is that the hat supply is forever spent, while nobody actually received the hat. This could be used maliciously or occur by accident. When the hat is immutable, the max supply can never be corrected for this leak. It could be used to guarantee no additional, unfriendly hats can be minted, to maintain permanent power.
Instead of checking whether the user currently wears the hat, check whether their balance is over 0.
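A sketch of the change in `mintHat()`, replacing the wearer check with a raw balance check so that toggled-off or ineligible wearers are also caught:\\n```\\n if (_balanceOf[_wearer][_hatId] > 0) {\\n revert AlreadyWearingHat(_wearer, _hatId);\\n }\\n _mintHat(_wearer, _hatId);\\n```\\n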
null
```\\n /// @notice Internal call to mint a Hat token to a wearer\\n /// @dev Unsafe if called when `_wearer` has a non-zero balance of `_hatId`\\n /// @param _wearer The wearer of the Hat and the recipient of the newly minted token\\n /// @param _hatId The id of the Hat to mint\\n function _mintHat(address _wearer, uint256 _hatId) internal {\\n unchecked {\\n // should not overflow since `mintHat` enforces max balance of 1\\n _balanceOf[_wearer][_hatId] = 1;\\n // increment Hat supply counter\\n // should not overflow given AllHatsWorn check in `mintHat`\\n ++_hats[_hatId].supply;\\n }\\n emit TransferSingle(msg.sender, address(0), _wearer, _hatId, 1);\\n }\\n```\\n
TXs can be executed by less than the minimum required signatures
high
In HatsSignerGateBase, `checkTransaction()` is the function called by the Gnosis safe to approve the transaction. Several checks are in place.\\n```\\n uint256 safeOwnerCount = safe.getOwners().length;\\n if (safeOwnerCount < minThreshold) {\\n revert BelowMinThreshold(minThreshold, safeOwnerCount);\\n }\\n```\\n\\n```\\n uint256 validSigCount = countValidSignatures(txHash, signatures, signatures.length / 65);\\n // revert if there aren't enough valid signatures\\n if (validSigCount < safe.getThreshold()) {\\n revert InvalidSigners();\\n }\\n```\\n\\nThe first check is that the number of owners registered on the safe is at least minThreshold. The second check is that the number of valid signatures (wearers of relevant hats) is not below the safe's threshold. However, it turns out these requirements are not sufficient. A possible situation is that there are plenty of owners registered, but currently most do not wear a hat. `reconcileSignerCount()` could be called to reduce the safe's threshold to the current validSigCount, which can be below minThreshold. That would make both the first and second check succeed. However, minThreshold is defined to be the smallest number of signers that must come together to make a TX. The result is that a single signer could execute a TX on the safe, if the other signers are not wearers of hats (for example, their toggle has been temporarily set off in the case of a multi-hat signer gate).
Add another check in `checkTransaction()`, which states that validSigCount >= minThreshold.
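A sketch of the added check:\\n```\\n uint256 validSigCount = countValidSignatures(txHash, signatures, signatures.length / 65);\\n // enforce both the safe's threshold and the HSG's own minimum\\n if (validSigCount < safe.getThreshold() || validSigCount < minThreshold) {\\n revert InvalidSigners();\\n }\\n```\\n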
null
```\\n uint256 safeOwnerCount = safe.getOwners().length;\\n if (safeOwnerCount < minThreshold) {\\n revert BelowMinThreshold(minThreshold, safeOwnerCount);\\n }\\n```\\n
Target signature threshold can be bypassed leading to minority TXs
high
`checkTransaction()` is the enforcer of the HSG logic, making sure signers are wearers of hats and so on. The check below makes sure sufficient hat wearers signed the TX:\\n```\\n uint256 validSigCount = countValidSignatures(txHash, signatures, signatures.length / 65);\\n // revert if there aren't enough valid signatures\\n if (validSigCount < safe.getThreshold()) {\\n revert InvalidSigners();\\n }\\n```\\n\\nThe issue is that the safe's threshold is not guaranteed to be up to date. For example, initially there were 5 delegated signers. At some point, three lost eligibility. `reconcileSignerCount()` is called to update the safe's threshold to now have 2 signers. At a later point, the three signers which lost eligibility regained it. At this point, the threshold is still two, but there are 5 valid signers, so if targetThreshold is not below 5, they should all sign for a TX to be executed. That is not the case, as the old threshold is used. There are various scenarios which surface the lack of synchronization between the wearer status and safe's stored threshold.
Call `reconcileSignerCount()` before the validation code in `checkTransaction()`.
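A sketch of the ordering (the parameter list is the standard Gnosis guard interface):\\n```\\n function checkTransaction(\\n address to, uint256 value, bytes memory data, Enum.Operation operation,\\n uint256 safeTxGas, uint256 baseGas, uint256 gasPrice, address gasToken,\\n address payable refundReceiver, bytes memory signatures, address msgSender\\n ) external override {\\n // sync signerCount and the safe's threshold with current hat wearers first\\n reconcileSignerCount();\\n // ... existing validation follows\\n }\\n```\\n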
null
```\\n uint256 validSigCount = countValidSignatures(txHash, signatures, signatures.length / 65);\\n // revert if there aren't enough valid signatures\\n if (validSigCount < safe.getThreshold()) {\\n revert InvalidSigners();\\n }\\n```\\n
maxSigners can be bypassed
high
maxSigners is specified when creating an HSG and is left constant. It is enforced in two ways -targetThreshold may never be set above it, and new signers cannot register to the HSG when the signer count reached maxSigners. Below is the implementation code in HatsSignerGate.\\n```\\n function claimSigner() public virtual {\\n if (signerCount == maxSigners) {\\n revert MaxSignersReached();\\n }\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n _grantSigner(msg.sender);\\n }\\n```\\n\\nAn issue that arises is that this doesn't actually limit the number of registered signers. Indeed, signerCount is a variable that can fluctuate when wearers lose eligibility or a hat is inactive. At this point, `reconcileSignerCount()` can be called to update the signerCount to the current valid wearer count. A simple attack which achieves unlimited claims is as follows:\\nAssume maxSigners = 10\\n10 signers claim their spot, so signerCount is maxed out\\nA signer misbehaves, loses eligibility and the hat.\\nreconcile() is called, so signerCount is updated to 9\\nA new signer claims, making signerCount = 10\\nThe malicious signer behaves nicely and regains the hat.\\nreconcile() is called again, making signerCount = 11\\nAt this point, any eligible hat wearer can claim their hat, easily overrunning the maxSigners restriction.
The root cause is that users which registered but lose their hat are still stored in the safe's owners array, meaning they can always get re-introduced and bump the signerCount. Instead of checking the signerCount, a better idea would be to compare with the list of owners saved on the safe. If there are owners that are no longer holders, `removeSigner()` can be called to vacate space for new signers.
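A sketch of one approach inside `claimSigner()`, assuming the existing `removeSigner()` helper can be reused:\\n```\\n if (signerCount == maxSigners) {\\n // try to vacate a seat held by a registered owner who is no longer a valid signer\\n address[] memory owners = safe.getOwners();\\n for (uint256 i; i < owners.length; ++i) {\\n if (!isValidSigner(owners[i])) {\\n removeSigner(owners[i]);\\n break;\\n }\\n }\\n if (signerCount == maxSigners) revert MaxSignersReached();\\n }\\n```\\n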
null
```\\n function claimSigner() public virtual {\\n if (signerCount == maxSigners) {\\n revert MaxSignersReached();\\n }\\n if (safe.isOwner(msg.sender)) {\\n revert SignerAlreadyClaimed(msg.sender);\\n }\\n if (!isValidSigner(msg.sender)) {\\n revert NotSignerHatWearer(msg.sender);\\n }\\n _grantSigner(msg.sender);\\n }\\n```\\n
Attacker can DOS minting of new top hats in low-fee chains
medium
In Hats protocol, anyone can be assigned a top hat via the `mintTopHat()` function. The top hats are structured with top 32 bits acting as a domain ID, and the lower 224 bits are cleared. There are therefore up to 2^32 = ~ 4 billion top hats. Once they are all consumed, `mintTopHat()` will always fail:\\n```\\n // uint32 lastTopHatId will overflow in brackets\\n topHatId = uint256(++lastTopHatId) << 224;\\n```\\n\\nThis behavior exposes the project to a DOS vector, where an attacker can mint 4 billion top hats in a loop and make the function unusable, forcing a redeploy of Hats protocol. This is unrealistic on ETH mainnet due to gas consumption, but definitely achievable on the cheaper L2 networks. As the project will be deployed on a large variety of EVM blockchains, this poses a significant risk.
Require a non-refundable deposit fee (paid in native token) when minting a top hat. Price it so that consuming the 32-bit space will be impossible. This can also serve as a revenue stream for the Hats project.
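A sketch of the fee gate; `topHatFee` and the error are hypothetical:\\n```\\n function mintTopHat(address _target) public payable returns (uint256 topHatId) {\\n // price the fee so that exhausting the 2^32 id space is uneconomical\\n if (msg.value < topHatFee) revert InsufficientTopHatFee();\\n topHatId = uint256(++lastTopHatId) << 224;\\n // ... existing top hat creation logic follows\\n }\\n```\\n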
null
```\\n // uint32 lastTopHatId will overflow in brackets\\n topHatId = uint256(++lastTopHatId) << 224;\\n```\\n
Linking of hat trees can freeze hat operations
medium
Hats support tree-linking, where hats from one node link to the first level of a different domain. This way, the amount of levels for the linked-to tree increases by the linked-from level count. This is generally fine; however, the lack of a check on the new total level introduces severe risks.\\n```\\n /// @notice Identifies the level a given hat in its hat tree\\n /// @param _hatId the id of the hat in question\\n /// @return level (0 to type(uint8).max)\\n function getHatLevel(uint256 _hatId) public view returns (uint8) {\\n```\\n\\nThe `getHatLevel()` function can only return up to level 255. It is used by the `checkAdmin()` call used in many of the critical functions in the Hats contract. Therefore, if, for example, 17 hat domains are joined together in the most stretched way possible, it would result in a correct hat level of 271, making this calculation revert:\\n```\\n if (treeAdmin != 0) {\\n return 1 + uint8(i) + getHatLevel(treeAdmin);\\n }\\n```\\n\\nThe impact is that intentional or accidental linking that creates too many levels would freeze the higher hat levels from any interaction with the contract.
It is recommended to add a check in `_linkTopHatToTree()`, that the new accumulated level can fit in uint8. Another option would be to change the maximum level type to uint32.
null
```\\n /// @notice Identifies the level a given hat in its hat tree\\n /// @param _hatId the id of the hat in question\\n /// @return level (0 to type(uint8).max)\\n function getHatLevel(uint256 _hatId) public view returns (uint8) {\\n```\\n
Attacker can make a signer gate creation fail
medium
DAOs can deploy an HSG using `deployHatsSignerGateAndSafe()` or `deployMultiHatsSignerGateAndSafe()`. The parameters are encoded and passed to moduleProxyFactory.deployModule():\\n```\\n bytes memory initializeParams = abi.encode(\\n _ownerHatId, _signersHatId, _safe, hatsAddress, _minThreshold, _targetThreshold, _maxSigners, version\\n );\\n hsg = moduleProxyFactory.deployModule(\\n hatsSignerGateSingleton, abi.encodeWithSignature("setUp(bytes)", initializeParams), _saltNonce\\n );\\n```\\n\\nThis function will call createProxy():\\n```\\n proxy = createProxy(masterCopy, keccak256(abi.encodePacked(keccak256(initializer), saltNonce)));\\n```\\n\\nThe second parameter is the generated salt, which is derived from the initializer and the passed saltNonce. Finally `createProxy()` will use CREATE2 to create the contract:\\n```\\n function createProxy(address target, bytes32 salt) internal returns (address result) {\\n if (address(target) == address(0)) revert ZeroAddress(target);\\n if (address(target).code.length == 0) revert TargetHasNoCode(target);\\n bytes memory deployment = abi.encodePacked(\\n hex"602d8060093d393df3363d3d373d3d3d363d73", target, hex"5af43d82803e903d91602b57fd5bf3"\\n );\\n // solhint-disable-next-line no-inline-assembly\\n assembly {\\n result := create2(0, add(deployment, 0x20), mload(deployment), salt)\\n }\\n if (result == address(0)) revert TakenAddress(result);\\n }\\n```\\n\\nThe issue is that an attacker can front-run the creation TX with their own creation request, using the same parameters. This would create the exact address produced by the CREATE2 call, since the parameters and therefore the final salt are the same. When the victim's transaction executes, the address is non-empty, so the EVM rejects its creation. This results in bad UX for a user, who thinks the creation did not succeed. The resulting contract would still be usable, but would be hard to track as it was created in another TX.
Use an ever-increasing nonce counter to guarantee unique contract addresses.
null
```\\n bytes memory initializeParams = abi.encode(\\n _ownerHatId, _signersHatId, _safe, hatsAddress, _minThreshold, _targetThreshold, _maxSigners, version\\n );\\n hsg = moduleProxyFactory.deployModule(\\n hatsSignerGateSingleton, abi.encodeWithSignature("setUp(bytes)", initializeParams), _saltNonce\\n );\\n```\\n
Signers can backdoor the safe to execute any transaction in the future without consensus
medium
The function `checkAfterExecution()` is called by the safe after the signer's requested TX was executed (and authorized). It mainly checks that the linkage between the safe and the HSG has not been compromised.\\n```\\n function checkAfterExecution(bytes32, bool) external override {\\n if (\\n abi.decode(StorageAccessible(address(safe)).getStorageAt(uint256(GUARD_STORAGE_SLOT), 1), (address))\\n != address(this)\\n ) {\\n revert CannotDisableThisGuard(address(this));\\n }\\n if (!IAvatar(address(safe)).isModuleEnabled(address(this))) {\\n revert CannotDisableProtectedModules(address(this));\\n }\\n if (safe.getThreshold() != _correctThreshold()) {\\n revert SignersCannotChangeThreshold();\\n }\\n // leave checked to catch underflows triggered by re-entry attempts\\n --guardEntries;\\n }\\n```\\n\\nHowever, it is missing a check that no new modules have been introduced to the safe. When modules execute TXs on a Gnosis safe, the guard safety callbacks do not get called. As a result, any new module introduced is free to execute whatever it wishes on the safe. It constitutes a serious backdoor threat and undermines the HSG security model.
Check that no new modules have been introduced to the safe, using the `getModulesPaginated()` utility.
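A sketch of the check; `enabledModuleCount` (recorded at setup) and the error are hypothetical, and address(0x1) is the safe's module-list sentinel:\\n```\\n (address[] memory modules,) = safe.getModulesPaginated(address(0x1), enabledModuleCount + 1);\\n if (modules.length > enabledModuleCount) {\\n revert SignersCannotAddModules();\\n }\\n```\\n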
null
```\\n function checkAfterExecution(bytes32, bool) external override {\\n if (\\n abi.decode(StorageAccessible(address(safe)).getStorageAt(uint256(GUARD_STORAGE_SLOT), 1), (address))\\n != address(this)\\n ) {\\n revert CannotDisableThisGuard(address(this));\\n }\\n if (!IAvatar(address(safe)).isModuleEnabled(address(this))) {\\n revert CannotDisableProtectedModules(address(this));\\n }\\n if (safe.getThreshold() != _correctThreshold()) {\\n revert SignersCannotChangeThreshold();\\n }\\n // leave checked to catch underflows triggered by re-entry attempts\\n --guardEntries;\\n }\\n```\\n
createHat does not detect MAX_LEVEL admin correctly
low
In `createHat()`, the contract checks that the user is not minting hats below the lowest hat tier:\\n```\\n function createHat(\\n uint256 _admin,\\n string memory _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string memory _imageURI\\n ) public returns (uint256 newHatId) {\\n if (uint8(_admin) > 0) {\\n revert MaxLevelsReached();\\n }\\n // ...\\n }\\n```\\n\\nThe issue is that it does not check for max level correctly, as it looks only at the lowest 8 bits. Each level is composed of 16 bits, so an ID of the form xx00 would pass this check. Fortunately, although the check is passed, the function will revert later: the call to `getNextId(_admin)` will return 0 for a max-level admin, and _checkAdmin(0) is guaranteed to fail. However, the check should still be fixed, as it is not exploitable only by chance.
Change the conversion to uint16.
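A sketch of the corrected check; uint16 covers a full level's bits, so any admin already at the lowest level is caught:\\n```\\n if (uint16(_admin) > 0) {\\n revert MaxLevelsReached();\\n }\\n```\\n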
null
```\\n function createHat(\\n uint256 _admin,\\n string memory _details,\\n uint32 _maxSupply,\\n address _eligibility,\\n address _toggle,\\n bool _mutable,\\n string memory _imageURI\\n ) public returns (uint256 newHatId) {\\n if (uint8(_admin) > 0) {\\n revert MaxLevelsReached();\\n }\\n // ...\\n }\\n```\\n
Incorrect imageURI is returned for hats in certain cases
low
Function `getImageURIForHat()` should return the most relevant imageURI for the requested hatId. It will iterate backwards from the current level down to level 0, and return an image if it exists for that level.\\n```\\n function getImageURIForHat(uint256 _hatId) public view returns (string memory) {\\n // check _hatId first to potentially avoid the `getHatLevel` call\\n Hat memory hat = _hats[_hatId];\\n string memory imageURI = hat.imageURI; // save 1 SLOAD\\n // if _hatId has an imageURI, we return it\\n if (bytes(imageURI).length > 0) {\\n return imageURI;\\n }\\n // otherwise, we check its branch of admins\\n uint256 level = getHatLevel(_hatId);\\n // but first we check if _hatId is a tophat, in which case we fall back to the global image uri\\n if (level == 0) return baseImageURI;\\n // otherwise, we check each of its admins for a valid imageURI\\n uint256 id;\\n // already checked at `level` above, so we start the loop at `level - 1`\\n for (uint256 i = level - 1; i > 0;) {\\n id = getAdminAtLevel(_hatId, uint8(i));\\n hat = _hats[id];\\n imageURI = hat.imageURI;\\n if (bytes(imageURI).length > 0) {\\n return imageURI;\\n }\\n // should not underflow given stopping condition is > 0\\n unchecked {\\n --i;\\n }\\n }\\n // if none of _hatId's admins has an imageURI of its own, we again fall back to the global image uri\\n return baseImageURI;\\n }\\n```\\n\\nIt can be observed that the loop body will not run for level 0. When the loop is finished, the code just returns the baseImageURI, which is a Hats-level fallback, rather than a top hat level fallback. As a result, the image displayed will not be correct when querying for a level above 0, when all levels except level 0 have no registered image.
Before returning the baseImageURI, check if level 0 admin has a registered image.
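A sketch of the fallback, reusing the function's local variables:\\n```\\n // check the top hat (level 0 admin) before falling back to the global default\\n id = getAdminAtLevel(_hatId, 0);\\n hat = _hats[id];\\n imageURI = hat.imageURI;\\n if (bytes(imageURI).length > 0) {\\n return imageURI;\\n }\\n return baseImageURI;\\n```\\n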
null
```\\n function getImageURIForHat(uint256 _hatId) public view returns (string memory) {\\n // check _hatId first to potentially avoid the `getHatLevel` call\\n Hat memory hat = _hats[_hatId];\\n string memory imageURI = hat.imageURI; // save 1 SLOAD\\n // if _hatId has an imageURI, we return it\\n if (bytes(imageURI).length > 0) {\\n return imageURI;\\n }\\n // otherwise, we check its branch of admins\\n uint256 level = getHatLevel(_hatId);\\n // but first we check if _hatId is a tophat, in which case we fall back to the global image uri\\n if (level == 0) return baseImageURI;\\n // otherwise, we check each of its admins for a valid imageURI\\n uint256 id;\\n // already checked at `level` above, so we start the loop at `level - 1`\\n for (uint256 i = level - 1; i > 0;) {\\n id = getAdminAtLevel(_hatId, uint8(i));\\n hat = _hats[id];\\n imageURI = hat.imageURI;\\n if (bytes(imageURI).length > 0) {\\n return imageURI;\\n }\\n // should not underflow given stopping condition is > 0\\n unchecked {\\n --i;\\n }\\n }\\n // if none of _hatId's admins has an imageURI of its own, we again fall back to the global image uri\\n return baseImageURI;\\n }\\n```\\n
Fetching of hat status may fail due to lack of input sanitization
low
The functions `_isActive()` and `_isEligible()` are used by `balanceOf()` and other functions, so they should never revert. However, they perform ABI decoding from external inputs.\\n```\\n function _isActive(Hat memory _hat, uint256 _hatId) internal view returns (bool) {\\n bytes memory data = abi.encodeWithSignature("getHatStatus(uint256)", _hatId);\\n (bool success, bytes memory returndata) = _hat.toggle.staticcall(data);\\n if (success && returndata.length > 0) {\\n return abi.decode(returndata, (bool));\\n } else {\\n return _getHatStatus(_hat);\\n }\\n }\\n```\\n\\nIf the toggle returns invalid return data (whether malicious or by accident), `abi.decode()` would revert, causing the entire function to revert.
Wrap the decoding operation for both affected functions in a try/catch statement. Fall back to the `_getHatStatus()` result if necessary. Checking that returndata size is correct is not enough as bool encoding must be 64-bit encoded 0 or 1.
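An alternative to try/catch that achieves the same goal is to decode the word as uint256 (which cannot revert for 32-byte data) and accept only the canonical bool encodings:\\n```\\n (bool success, bytes memory returndata) = _hat.toggle.staticcall(data);\\n if (success && returndata.length == 32) {\\n uint256 word = abi.decode(returndata, (uint256));\\n if (word == 1) return true;\\n if (word == 0) return false;\\n }\\n return _getHatStatus(_hat);\\n```\\n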
null
```\\n function _isActive(Hat memory _hat, uint256 _hatId) internal view returns (bool) {\\n bytes memory data = abi.encodeWithSignature("getHatStatus(uint256)", _hatId);\\n (bool success, bytes memory returndata) = _hat.toggle.staticcall(data);\\n if (success && returndata.length > 0) {\\n return abi.decode(returndata, (bool));\\n } else {\\n return _getHatStatus(_hat);\\n }\\n }\\n```\\n
Attacker can take over GMXAdapter implementation contract
low
GMXAdapter inherits from BaseExchangeAdapter. It is an implementation contract for a transparent proxy and has the following initializer:\\n```\\n function initialize() external initializer {\\n __Ownable_init();\\n }\\n```\\n\\nTherefore, an attacker can call initialize() on the implementation contract and become the owner. At this point they can do just about anything to this contract, but it has no impact on the proxy as it is using separate storage. If there was a delegatecall coded in GMXAdapter, attacker could have used it to call an attacker's contract and execute the SELFDESTRUCT opcode, killing the implementation. With no implementation, the proxy itself would not be functional until it is updated to a new implementation. It is ill-advised to allow anyone to have control over implementation contracts as future upgrades may make the attack surface exploitable.
The standard approach is to call `_disableInitializers()` from OpenZeppelin's Initializable module in the constructor.
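The fix in full:\\n```\\n constructor() {\\n // lock the implementation contract so it can never be initialized directly\\n _disableInitializers();\\n }\\n```\\n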
null
```\\n function initialize() external initializer {\\n __Ownable_init();\\n }\\n```\\n
disordered fee calculation causes collateral changes to be inaccurate
high
`_increasePosition()` changes the Hedger's GMX position by sizeDelta amount and collateralDelta collateral. There are two collateralDelta corrections - one for swap fees and one for position fees. Since the swap fee depends on the up-to-date collateralDelta, it's important to calculate it after the position fee, unlike in the current implementation. In practice, this may lead to the leverage ratio being higher than intended, as the collateralDelta sent to GMX is lower than it should be.\\n```\\n if (isLong) {\\n uint swapFeeBP = getSwapFeeBP(isLong, true, collateralDelta);\\n collateralDelta = (collateralDelta * (BASIS_POINTS_DIVISOR + swapFeeBP)) / BASIS_POINTS_DIVISOR;\\n }\\n // add margin fee\\n // when we increase position, fee always got deducted from collateral\\n collateralDelta += _getPositionFee(currentPos.size, sizeDelta, currentPos.entryFundingRate);\\n```\\n
Flip the order of `getSwapFeeBP()` and `_getPositionFee()`.
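A sketch of the reordered code:\\n```\\n // charge the position fee first so the swap fee is computed on the final collateral amount\\n collateralDelta += _getPositionFee(currentPos.size, sizeDelta, currentPos.entryFundingRate);\\n if (isLong) {\\n uint swapFeeBP = getSwapFeeBP(isLong, true, collateralDelta);\\n collateralDelta = (collateralDelta * (BASIS_POINTS_DIVISOR + swapFeeBP)) / BASIS_POINTS_DIVISOR;\\n }\\n```\\n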
null
```\\n if (isLong) {\\n uint swapFeeBP = getSwapFeeBP(isLong, true, collateralDelta);\\n collateralDelta = (collateralDelta * (BASIS_POINTS_DIVISOR + swapFeeBP)) / BASIS_POINTS_DIVISOR;\\n }\\n // add margin fee\\n // when we increase position, fee always got deducted from collateral\\n collateralDelta += _getPositionFee(currentPos.size, sizeDelta, currentPos.entryFundingRate);\\n```\\n
small LP providers may be unable to withdraw their deposits
medium
In LiquidityPool's initiateWithdraw(), it's required that the withdrawn value is above a minimum parameter, or that the withdrawn token amount is above the same minimum.\\n```\\n if (withdrawalValue < lpParams.minDepositWithdraw && \\n amountLiquidityToken < lpParams.minDepositWithdraw) {\\n revert MinimumWithdrawNotMet(address(this), withdrawalValue, lpParams.minDepositWithdraw);\\n }\\n```\\n\\nThe issue is that minDepositWithdraw is measured in dollars while amountLiquidityToken is measured in LP tokens. The intention was that if LP tokens lost value and a previous deposit is now worth less than minDepositWithdraw, it would still be withdrawable. However, the current implementation doesn't check for that correctly: since the LP-to-dollar exchange rate at deposit time is not known, it is effectively hardcoded as 1:1 here. The impact is that users may not be able to withdraw LP with a token amount that was above the minimum at deposit time, or vice versa.
Consider calculating an average exchange rate at which users have minted and use it to verify withdrawal amount is satisfactory.
null
```\\n if (withdrawalValue < lpParams.minDepositWithdraw && \\n amountLiquidityToken < lpParams.minDepositWithdraw) {\\n revert MinimumWithdrawNotMet(address(this), withdrawalValue, lpParams.minDepositWithdraw);\\n }\\n```\\n
base to quote swaps trust GMX-provided minPrice and maxPrice to be correct, which may be manipulated
medium
exchangeFromExactBase() in GMXAdapter converts an amount of base to quote. It implements slippage protection by using the GMX vault's getMinPrice() and getMaxPrice() utilities. However, such protection is insufficient because GMX prices may be manipulated. Indeed, GMX supports “AMM pricing” mode where quotes are calculated from Uniswap reserves. A possible attack would be to drive up the base token (e.g. ETH) price, sell a large ETH amount to the GMXAdapter, and repay the flashloan used for manipulation. exchangeFromExactBase() is attacker-reachable from LiquidityPool's exchangeBase().\\n```\\n uint tokenInPrice = _getMinPrice(address(baseAsset));\\n uint tokenOutPrice = _getMaxPrice(address(quoteAsset));\\n // rest of code\\n uint minOut = tokenInPrice\\n .multiplyDecimal(marketPricingParams[_optionMarket].minReturnPercent)\\n .multiplyDecimal(_amountBase)\\n .divideDecimal(tokenOutPrice);\\n```\\n
Verify that the `getMinPrice()` and `getMaxPrice()` outputs are close to Chainlink-provided prices, as done in `getSpotPriceForMarket()`.
null
```\\n uint tokenInPrice = _getMinPrice(address(baseAsset));\\n uint tokenOutPrice = _getMaxPrice(address(quoteAsset));\\n // rest of code\\n uint minOut = tokenInPrice\\n .multiplyDecimal(marketPricingParams[_optionMarket].minReturnPercent)\\n .multiplyDecimal(_amountBase)\\n .divideDecimal(tokenOutPrice);\\n```\\n
recoverFunds() does not handle popular ERC20 tokens like BNB
medium
recoverFunds() is used for recovery in case of mistakenly-sent tokens. However, it uses an unchecked `transfer()` to send tokens back, which does not support the hundreds of ERC20 tokens that do not return a bool. Such tokens are therefore likely to be unrecoverable.\\n```\\n if (token == quoteAsset || token == baseAsset || token == weth) {\\n revert CannotRecoverRestrictedToken(address(this));\\n }\\n token.transfer(recipient, token.balanceOf(address(this)));\\n```\\n
Use Open Zeppelin's SafeERC20 encapsulation of ERC20 transfer functions.
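A sketch of the change, assuming the token is typed as OpenZeppelin's IERC20:\\n```\\n using SafeERC20 for IERC20;\\n\\n // inside recoverFunds()\\n token.safeTransfer(recipient, token.balanceOf(address(this)));\\n```\\n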
null
```\\n if (token == quoteAsset || token == baseAsset || token == weth) {\\n revert CannotRecoverRestrictedToken(address(this));\\n }\\n token.transfer(recipient, token.balanceOf(address(this)));\\n```\\n
setPositionRouter leaks approval to previous positionRouter
low
positionRouter is used to change GMX positions in GMXFuturesPoolHedger. It can be replaced by a new router if GMX redeploys, for example if a bug is found or the previous one is hacked. The new positionRouter receives approval from the contract. However, approval to the previous positionRouter is not revoked.\\n```\\n function setPositionRouter(IPositionRouter _positionRouter) external onlyOwner {\\n positionRouter = _positionRouter;\\n router.approvePlugin(address(positionRouter));\\n emit PositionRouterSet(_positionRouter);\\n }\\n```\\n\\nA number of unlikely, yet dire scenarios could occur.
Use router.denyPlugin() to remove privileges from the previous positionRouter.
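A sketch of the fix:\\n```\\n function setPositionRouter(IPositionRouter _positionRouter) external onlyOwner {\\n // revoke the old router's plugin rights before installing the new one\\n router.denyPlugin(address(positionRouter));\\n positionRouter = _positionRouter;\\n router.approvePlugin(address(positionRouter));\\n emit PositionRouterSet(_positionRouter);\\n }\\n```\\n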
null
```\\n function setPositionRouter(IPositionRouter _positionRouter) external onlyOwner {\\n positionRouter = _positionRouter;\\n router.approvePlugin(address(positionRouter));\\n emit PositionRouterSet(_positionRouter);\\n }\\n```\\n
PoolHedger can receive ETH directly from anyone
low
A `receive()` function has been added to GMXFuturesPoolHedger, so that it is able to receive ETH from GMX as request refunds. However, it is not advisable to have an open `receive()` function if it is not necessary. Users may wrongly send ETH directly to PoolHedger and lose it forever.\\n```\\n receive() external payable {}\\n```\\n
Add a msg.sender check in the receive() function, and make sure sender is positionRouter.
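A sketch of the guard; the error name is hypothetical:\\n```\\n receive() external payable {\\n // only accept execution-fee refunds from the GMX position router\\n if (msg.sender != address(positionRouter)) revert OnlyPositionRouterRefunds();\\n }\\n```\\n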
null
```\\n receive() external payable {}\\n```\\n
Attacker can freeze profit withdrawals from V3 vaults
high
Users of Ninja can use the Vault's `withdrawProfit()` to withdraw profits. It starts with the following check:\\n```\\n if (block.timestamp <= lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n```\\n\\nIf an attacker can front-run a user's `withdrawProfit()` TX and set lastProfitTime to block.timestamp, they effectively freeze the user's yield. That is indeed possible using the Vault's paired strategy's `harvest()` function. It is permissionless and calls `_harvestCore()`. The attack path is shown below.\\n```\\n function harvest() external override whenNotPaused returns (uint256 callerFee) {\\n require(lastHarvestTimestamp != block.timestamp);\\n uint256 harvestSeconds = lastHarvestTimestamp > 0 ? block.timestamp - lastHarvestTimestamp : 0;\\n lastHarvestTimestamp = block.timestamp;\\n uint256 sentToVault;\\n uint256 underlyingTokenCount;\\n (callerFee, underlyingTokenCount, sentToVault) = _harvestCore();\\n emit StrategyHarvest(msg.sender, underlyingTokenCount, harvestSeconds, sentToVault);\\n }\\n```\\n\\n```\\n function _harvestCore() internal override returns (uint256 callerFee, uint256 underlyingTokenCount, uint256 sentToVault) {\\n IMasterChef(SPOOKY_SWAP_FARM_V2).deposit(POOL_ID, 0);\\n _swapFarmEmissionTokens();\\n callerFee = _chargeFees();\\n underlyingTokenCount = balanceOf();\\n sentToVault = _sendYieldToVault();\\n }\\n```\\n\\n```\\n function _sendYieldToVault() internal returns (uint256 sentToVault) {\\n sentToVault = IERC20Upgradeable(USDC).balanceOf(address(this));\\n if (sentToVault > 0) {\\n IERC20Upgradeable(USDC).approve(vault, sentToVault);\\n IVault(vault).depositProfitTokenForUsers(sentToVault);\\n }\\n }\\n```\\n\\n```\\n function depositProfitTokenForUsers(uint256 _amount) external nonReentrant {\\n if (_amount == 0) {\\n revert NYProfitTakingVault__ZeroAmount();\\n }\\n if (block.timestamp <= lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n if (msg.sender != strategy) {\\n revert NYProfitTakingVault__OnlyStrategy();\\n }\\n uint256 totalShares = totalSupply();\\n if (totalShares == 0) {\\n lastProfitTime = block.timestamp;\\n return;\\n }\\n accProfitTokenPerShare += ((_amount * PROFIT_TOKEN_PER_SHARE_PRECISION) / totalShares);\\n lastProfitTime = block.timestamp;\\n // Now pull in the tokens (Should have permission)\\n // We only want to pull the tokens with accounting\\n profitToken.transferFrom(strategy, address(this), _amount);\\n emit ProfitReceivedFromStrategy(_amount);\\n }\\n```\\n
Do not revert profit withdrawals when block.timestamp equals lastProfitTime; use a strict comparison so a same-block harvest cannot block the user.
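A sketch of the relaxed check:\\n```\\n if (block.timestamp < lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n```\\n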
null
```\\n if (block.timestamp <= lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n```\\n
Lack of child rewarder reserves could lead to freeze of funds
high
In ComplexRewarder.sol, `onReward()` is used to distribute rewards for the previous time period, using the complex rewarder and any child rewarders. If the complex rewarder does not have enough tokens to hand out the reward, it correctly stores the rewards owed in storage. However, a child rewarder will attempt to hand out the reward and may revert:\\n```\\n function onReward(uint _pid, address _user, address _to, uint, uint _amt) external override onlyParent nonReentrant {\\n PoolInfo memory pool = updatePool(_pid);\\n if (pool.lastRewardTime == 0) return;\\n UserInfo storage user = userInfo[_pid][_user];\\n uint pending;\\n if (user.amount > 0) {\\n pending = ((user.amount * pool.accRewardPerShare) / ACC_TOKEN_PRECISION) - user.rewardDebt;\\n rewardToken.safeTransfer(_to, pending);\\n }\\n user.amount = _amt;\\n user.rewardDebt = (_amt * pool.accRewardPerShare) / ACC_TOKEN_PRECISION;\\n emit LogOnReward(_user, _pid, pending, _to);\\n }\\n```\\n\\nImportantly, if the child rewarder fails, the parent's `onReward()` reverts too:\\n```\\n uint len = childrenRewarders.length();\\n for (uint i = 0; i < len; ) {\\n IRewarder(childrenRewarders.at(i)).onReward(_pid, _user, _to, 0, _amt);\\n unchecked {\\n ++i;\\n }\\n }\\n```\\n\\nIn the worst-case scenario, this will cause the user's `withdraw()` call to the V3 Vault to revert.
Introduce sufficient exception handling in the ComplexRewarder.sol contract, so that `onReward()` would never fail.
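One possible shape of the isolation in the parent's loop; note that a production fix should also record the amounts owed (as the parent already does for itself) rather than silently dropping them:\\n```\\n uint len = childrenRewarders.length();\\n for (uint i = 0; i < len; ) {\\n // isolate each child so one failing rewarder cannot block the user's withdrawal\\n try IRewarder(childrenRewarders.at(i)).onReward(_pid, _user, _to, 0, _amt) {} catch {}\\n unchecked {\\n ++i;\\n }\\n }\\n```\\n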
null
```\\n function onReward(uint _pid, address _user, address _to, uint, uint _amt) external override onlyParent nonReentrant {\\n PoolInfo memory pool = updatePool(_pid);\\n if (pool.lastRewardTime == 0) return;\\n UserInfo storage user = userInfo[_pid][_user];\\n uint pending;\\n if (user.amount > 0) {\\n pending = ((user.amount * pool.accRewardPerShare) / ACC_TOKEN_PRECISION) - user.rewardDebt;\\n rewardToken.safeTransfer(_to, pending);\\n }\\n user.amount = _amt;\\n user.rewardDebt = (_amt * pool.accRewardPerShare) / ACC_TOKEN_PRECISION;\\n emit LogOnReward(_user, _pid, pending, _to);\\n }\\n```\\n
Wrong accounting of user's holdings allows theft of reward
high
In `deposit()`, `withdraw()` and `withdrawProfit()`, `rewarder.onReward()` is called for reward bookkeeping. It transfers the previously eligible rewards and updates the amount the user currently holds:\n```\n user.amount = _amt;\n user.rewardDebt = (_amt * pool.accRewardPerShare) / ACC_TOKEN_PRECISION;\n user.rewardsOwed = rewardsOwed;\n```\n\nIn `withdraw()`, there is a critical issue: `onReward()` is called too early:\n```\n // Update rewarder for this user\n if (address(rewarder) != address(0)) {\n rewarder.onReward(0, msg.sender, msg.sender, pending, user.amount);\n }\n // Burn baby burn\n _burn(msg.sender, _shares);\n // User accounting\n uint256 userAmount = balanceOf(msg.sender);\n // - Underlying (Frontend ONLY)\n if (userAmount == 0) {\n user.amount = 0;\n } else {\n user.amount -= r;\n }\n```\n\nThe new _amt stored in the reward contract's user.amount is the vault's user.amount before the withdrawn amount is decremented. Therefore, the withdrawn amount keeps accruing rewards even though it is no longer in the contract, effectively stealing the rewards of others and leading to reward insolvency. To exploit this flaw, an attacker deposits a large amount and immediately withdraws it, except for one wei. When they wish to collect the rewards accrued for others, they withdraw the remaining wei, which triggers `onReward()`, which calculates and sends the pending rewards for the previously withdrawn amount.
Move the `onReward()` call to after user.amount is updated.
null
```\\n user.amount = _amt;\\n user.rewardDebt = (_amt * pool.accRewardPerShare) / ACC_TOKEN_PRECISION;\\n user.rewardsOwed = rewardsOwed;\\n```\\n
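A sketch of the corrected ordering in the vault's `withdraw()`: settle the vault-side accounting first, then report the post-withdrawal amount to the rewarder (`pending` and `r` are computed earlier in the function, as in the code above).
```
// Burn and update vault accounting first
_burn(msg.sender, _shares);
uint256 userAmount = balanceOf(msg.sender);
if (userAmount == 0) {
    user.amount = 0;
} else {
    user.amount -= r;
}
// Only now update the rewarder, so rewards accrue on the reduced amount
if (address(rewarder) != address(0)) {
    rewarder.onReward(0, msg.sender, msg.sender, pending, user.amount);
}
```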
Unsafe transferFrom breaks compatibility with 100s of ERC20 tokens
medium
In Ninja vaults, the delegated strategy sends profit tokens to the vault using `depositProfitTokenForUsers()`. The vault transfers the tokens in using:\n```\n // Now pull in the tokens (Should have permission)\n // We only want to pull the tokens with accounting\n profitToken.transferFrom(strategy, address(this), _amount);\n emit ProfitReceivedFromStrategy(_amount);\n```\n\nThe issue is that the code doesn't use the `safeTransferFrom()` utility from SafeERC20. Therefore, profitTokens that don't return a bool from `transferFrom()` will cause a revert, meaning the tokens are stuck in the strategy. Examples of such tokens are USDT and BNB, among hundreds of others.
Use `safeTransferFrom()` from SafeERC20.sol
null
```\\n // Now pull in the tokens (Should have permission)\\n // We only want to pull the tokens with accounting\\n profitToken.transferFrom(strategy, address(this), _amount);\\n emit ProfitReceivedFromStrategy(_amount);\\n```\\n
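The recommended one-line swap to OpenZeppelin's SafeERC20 wrapper, which handles tokens that return no bool (sketch assumes profitToken is typed IERC20):
```
import {SafeERC20} from "@openzeppelin/contracts/token/ERC20/utils/SafeERC20.sol";
import {IERC20} from "@openzeppelin/contracts/token/ERC20/IERC20.sol";

// in the vault contract:
using SafeERC20 for IERC20;

// Now pull in the tokens (Should have permission)
profitToken.safeTransferFrom(strategy, address(this), _amount);
emit ProfitReceivedFromStrategy(_amount);
```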
Attacker can force partial withdrawals to fail
medium
In Ninja vaults, users call `withdraw()` to take back their deposited tokens. There is bookkeeping on the remaining amount:\n```\n uint256 userAmount = balanceOf(msg.sender);\n // - Underlying (Frontend ONLY)\n if (userAmount == 0) {\n user.amount = 0;\n } else {\n user.amount -= r;\n }\n```\n\nIf the withdrawal is partial (some tokens are left), user.amount is decremented by r.\n```\n uint256 r = (balance() * _shares) / totalSupply();\n```\n\nAbove, r is calculated as the share of the total balance kept in the vault corresponding to the user's _shares.\nWe can see that user.amount is incremented in deposit().\n```\n function deposit(uint256 _amount) public nonReentrant {\n …\n user.amount += _amount;\n …\n }\n```\n\nThe issue is that the calculated r can be more than _amount, causing an underflow in `withdraw()` and freezing the withdrawal. All an attacker needs to do is send a tiny amount of the underlying token directly to the contract to push the shares out of sync.
Redesign user structure, taking into account that balance of underlying can be externally manipulated
null
```\\n uint256 userAmount = balanceOf(msg.sender);\\n // - Underlying (Frontend ONLY)\\n if (userAmount == 0) {\\n user.amount = 0;\\n } else {\\n user.amount -= r;\\n }\\n```\\n
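Pending a fuller redesign, a defensive sketch that clamps the decrement so a direct token donation cannot make the subtraction revert:
```
// r is the caller's pro-rata share of the (manipulable) vault balance;
// clamp instead of underflowing when it exceeds the tracked deposit.
if (userAmount == 0 || r >= user.amount) {
    user.amount = 0;
} else {
    user.amount -= r;
}
```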
Rewards may be stuck due to unchangeable slippage parameter
medium
In NyPtvFantomWftmBooSpookyV2StrategyToUsdc.sol, MAX_SLIPPAGE is used to limit slippage in trades of BOO tokens to USDC, for yield:\n```\n function _swapFarmEmissionTokens() internal { IERC20Upgradeable boo = IERC20Upgradeable(BOO);\n uint256 booBalance = boo.balanceOf(address(this));\n if (booToUsdcPath.length < 2 || booBalance == 0) {\n return;\n }\n boo.safeIncreaseAllowance(SPOOKY_ROUTER, booBalance);\n uint256[] memory amounts = \n IUniswapV2Router02(SPOOKY_ROUTER).getAmountsOut(booBalance, booToUsdcPath);\n uint256 amountOutMin = (amounts[amounts.length - 1] * MAX_SLIPPAGE) / PERCENT_DIVISOR;\n IUniswapV2Router02(SPOOKY_ROUTER).swapExactTokensForTokensSupportingFeeOnTransferTokens( booBalance, amountOutMin, booToUsdcPath, address(this), block.timestamp );\n }\n```\n\nIf slippage is not satisfied, the entire transaction reverts. Since MAX_SLIPPAGE is constant, harvesting of the strategy can become stuck due to operations that incur too much slippage. For example, the strategy might accumulate a large amount of BOO, or `harvest()` can be sandwich-attacked.
Allow admin to set slippage after some timelock period.
null
```\\n function _swapFarmEmissionTokens() internal { IERC20Upgradeable boo = IERC20Upgradeable(BOO);\\n uint256 booBalance = boo.balanceOf(address(this));\\n if (booToUsdcPath.length < 2 || booBalance == 0) {\\n return;\\n }\\n boo.safeIncreaseAllowance(SPOOKY_ROUTER, booBalance);\\n uint256[] memory amounts = \\n IUniswapV2Router02(SPOOKY_ROUTER).getAmountsOut(booBalance, booToUsdcPath);\\n uint256 amountOutMin = (amounts[amounts.length - 1] * MAX_SLIPPAGE) / PERCENT_DIVISOR;\\n IUniswapV2Router02(SPOOKY_ROUTER).swapExactTokensForTokensSupportingFeeOnTransferTokens( booBalance, amountOutMin, booToUsdcPath, address(this), block.timestamp );\\n }\\n```\\n
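A sketch of an admin setter; `slippageBps` is an assumed storage variable replacing the MAX_SLIPPAGE constant, and `onlyOwner` stands in for whatever access control plus timelock the protocol uses.
```
uint256 public slippageBps; // replaces the constant MAX_SLIPPAGE

function setSlippage(uint256 _slippageBps) external onlyOwner {
    // in production, route this call through the protocol's timelock
    require(_slippageBps <= PERCENT_DIVISOR, "slippage too high");
    slippageBps = _slippageBps;
}
```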
Potential overflow in reward accumulator may freeze functionality
medium
Note the above description of `updatePool()` functionality. We can see that accRewardPerShare is only allocated 128 bits in PoolInfo:\\n```\\n struct PoolInfo {\\n uint128 accRewardPerShare;\\n uint64 lastRewardTime;\\n uint64 allocPoint;\\n```\\n\\nTherefore, even if truncation issues do not occur, it is likely that continuous incrementation of the counter would cause accRewardPerShare to overflow, which would freeze vault functionalities such as withdrawal.
Steal 32 bits from lastRewardTime and 32 bits from allocPoint to make the accumulator have 192 bits, which should be enough for safe calculations.
null
```\\n struct PoolInfo {\\n uint128 accRewardPerShare;\\n uint64 lastRewardTime;\\n uint64 allocPoint;\\n```\\n
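The recommended repacking, which keeps the struct in a single storage slot while giving the accumulator 192 bits:
```
struct PoolInfo {
    uint192 accRewardPerShare; // widened from uint128
    uint32 lastRewardTime;     // a 32-bit timestamp is valid until 2106
    uint32 allocPoint;         // narrowed from uint64
}
```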
When using fee-on-transfer tokens in VaultV3, capacity is limited below underlyingCap
low
Vault V3 documentation states that it accounts properly for fee-on-transfer tokens. It calculates the actual transferred amount as below:\n```\n uint256 _pool = balance();\n if (_pool + _amount > underlyingCap) {\n revert NYProfitTakingVault__UnderlyingCapReached(underlyingCap);\n }\n uint256 _before = underlying.balanceOf(address(this));\n underlying.safeTransferFrom(msg.sender, address(this), _amount);\n uint256 _after = underlying.balanceOf(address(this));\n _amount = _after - _before;\n```\n\nA small issue is that underlyingCap is compared against _amount before correcting for the actual transferred amount. Therefore, the cap can never actually be reached, and the maximum capacity of the vault is limited to underlyingCap minus a factor of the fee %.
Move the underlyingCap check to below the effective _amount calculation
null
```\\n uint256 _pool = balance();\\n if (_pool + _amount > underlyingCap) {\\n revert NYProfitTakingVault__UnderlyingCapReached(underlyingCap);\\n }\\n uint256 _before = underlying.balanceOf(address(this));\\n underlying.safeTransferFrom(msg.sender, address(this), _amount);\\n uint256 _after = underlying.balanceOf(address(this));\\n _amount = _after - _before;\\n```\\n
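A sketch of the reordered check, comparing the cap against the amount actually received (a revert here also unwinds the transfer, since the whole transaction reverts):
```
uint256 _pool = balance();
uint256 _before = underlying.balanceOf(address(this));
underlying.safeTransferFrom(msg.sender, address(this), _amount);
uint256 _after = underlying.balanceOf(address(this));
_amount = _after - _before; // effective amount, net of transfer fees
if (_pool + _amount > underlyingCap) {
    revert NYProfitTakingVault__UnderlyingCapReached(underlyingCap);
}
```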
Redundant checks in Vault V3
low
`depositProfitTokenForUsers()` and `withdrawProfit()` contain the following check:\\n```\\n if (block.timestamp <= lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n```\\n\\nHowever, lastProfitTime is only ever set to block.timestamp. Therefore, it can never be larger than block.timestamp.
It would be best in terms of gas costs and logical clarity to change the comparison to `==`, since lastProfitTime can never exceed block.timestamp and equality is the only case the check can catch.
null
```\\n if (block.timestamp <= lastProfitTime) {\\n revert NYProfitTakingVault__ProfitTimeOutOfBounds();\\n }\\n```\\n
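The equivalent but clearer form, given that lastProfitTime can never exceed block.timestamp:
```
if (block.timestamp == lastProfitTime) {
    revert NYProfitTakingVault__ProfitTimeOutOfBounds();
}
```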
createUniswapRangeOrder() charges manager instead of pool
high
_createUniswapRangeOrder() can be called either from the manager flow, with createUniswapRangeOrder(), or pool-induced from hedgeDelta(). The issue is that the function assumes the sender is the parentLiquidityPool, for example:\n```\n if (inversed && balance < amountDesired) {\n // collat = 0\n uint256 transferAmount = amountDesired - balance;\n uint256 parentPoolBalance = \n ILiquidityPool(parentLiquidityPool).getBalance(address(token0));\n if (parentPoolBalance < transferAmount) { revert \n CustomErrors.WithdrawExceedsLiquidity(); \n }\n SafeTransferLib.safeTransferFrom(address(token0), msg.sender, \n address(this), transferAmount);\n } \n```\n\nThe balance check is done on the pool, but the funds are transferred from the sender, so a manager-initiated order will be funded with the manager's money.\n```\n function createUniswapRangeOrder(\n RangeOrderParams calldata params,\n uint256 amountDesired\n ) external {\n require(!_inActivePosition(), "RangeOrder: active position");\n _onlyManager();\n bool inversed = collateralAsset == address(token0);\n _createUniswapRangeOrder(params, amountDesired, inversed);\n }\n```\n
Ensure `safeTransferFrom()` uses parentLiquidityPool as the source.
null
```\\n if (inversed && balance < amountDesired) {\\n // collat = 0\\n uint256 transferAmount = amountDesired - balance;\\n uint256 parentPoolBalance = \\n ILiquidityPool(parentLiquidityPool).getBalance(address(token0));\\n if (parentPoolBalance < transferAmount) { revert \\n CustomErrors.WithdrawExceedsLiquidity(); \\n }\\n SafeTransferLib.safeTransferFrom(address(token0), msg.sender, \\n address(this), transferAmount);\\n } \\n```\\n
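A sketch of the fix: pull the shortfall from the pool whose balance was just checked, rather than from msg.sender (this assumes the pool has granted this contract an allowance):
```
if (inversed && balance < amountDesired) {
    uint256 transferAmount = amountDesired - balance;
    uint256 parentPoolBalance = ILiquidityPool(parentLiquidityPool).getBalance(address(token0));
    if (parentPoolBalance < transferAmount) { revert CustomErrors.WithdrawExceedsLiquidity(); }
    // pull from the pool, not the caller, so manager calls are not charged
    SafeTransferLib.safeTransferFrom(address(token0), parentLiquidityPool, address(this), transferAmount);
}
```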
hedgeDelta() priceToUse is calculated wrong, which causes bad hedges
high
When the _delta parameter of `hedgeDelta()` is negative, priceToUse will be the minimum of quotePrice and underlyingPrice.\n```\n // buy wETH\n // lowest price is best price when buying\n uint256 priceToUse = quotePrice < underlyingPrice ? quotePrice : \n underlyingPrice;\n RangeOrderDirection direction = inversed ? RangeOrderDirection.ABOVE \n : RangeOrderDirection.BELOW;\n RangeOrderParams memory rangeOrder = \n _getTicksAndMeanPriceFromWei(priceToUse, direction);\n```\n\nThis works fine when the direction is BELOW, because the lowerTick and upperTick calculated in _getTicksAndMeanPriceFromWei are guaranteed to be lower than the current price.\n```\n int24 lowerTick = direction == RangeOrderDirection.ABOVE ? \n nearestTick + tickSpacing : nearestTick - (2 * tickSpacing);\n int24 tickUpper = direction ==RangeOrderDirection.ABOVE ? lowerTick + \n tickSpacing : nearestTick - tickSpacing;\n```\n\nTherefore, the fulfill condition is not true and we mint from the correct base. However, when the direction is ABOVE, the oracle-supplied price (underlyingPrice) may be low enough relative to the pool price that the fulfill condition is already active. In that case, the contract tries to mint from the wrong asset, which causes the wrong tokens to be sent in. In effect, the contract is not hedging. A similar situation occurs when the _delta parameter is greater than zero.
Verify the calculated priceToUse is on the same side as pool-calculated tick price.
null
```\\n // buy wETH\\n // lowest price is best price when buying\\n uint256 priceToUse = quotePrice < underlyingPrice ? quotePrice : \\n underlyingPrice;\\n RangeOrderDirection direction = inversed ? RangeOrderDirection.ABOVE \\n : RangeOrderDirection.BELOW;\\n RangeOrderParams memory rangeOrder = \\n _getTicksAndMeanPriceFromWei(priceToUse, direction);\\n```\\n
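A sketch of the recommended sanity check, assuming the contract's own `getPoolPrice()` helper: reject a priceToUse that already sits on the wrong side of the pool's own price.
```
(uint256 poolPrice, ) = getPoolPrice();
if (direction == RangeOrderDirection.ABOVE) {
    require(priceToUse > poolPrice, "order already fulfillable");
} else {
    require(priceToUse < poolPrice, "order already fulfillable");
}
```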
multiplication overflow in getPoolPrice() likely
medium
`getPoolPrice()` is used in hedgeDelta() to get the price directly from the Uniswap v3 pool:\n```\n function getPoolPrice() public view returns (uint256 price, uint256 \n inversed){\n (uint160 sqrtPriceX96, , , , , , ) = pool.slot0();\n uint256 p = uint256(sqrtPriceX96) * uint256(sqrtPriceX96) * (10 \n ** token0.decimals());\n // token0/token1 in 1e18 format\n price = p / (2 ** 192);\n inversed = 1e36 / price;\n }\n```\n\nThe issue is that the calculation of p is likely to overflow: sqrtPriceX96 has 96 fractional bits, and 10 ** `token0.decimals()` takes 60 bits when decimals is 18, so only (256 - 2 * 96 - 60) / 2 = 2 bits remain for the non-fractional part of sqrtPriceX96.
Consider converting the sqrtPrice to a 60x18 format and performing arithmetic operations using the PRBMathUD60x18 library.
null
```\\n function getPoolPrice() public view returns (uint256 price, uint256 \\n inversed){\\n (uint160 sqrtPriceX96, , , , , , ) = pool.slot0();\\n uint256 p = uint256(sqrtPriceX96) * uint256(sqrtPriceX96) * (10 \\n ** token0.decimals());\\n // token0/token1 in 1e18 format\\n price = p / (2 ** 192);\\n inversed = 1e36 / price;\\n }\\n```\\n
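A sketch using Uniswap's FullMath (already used by this contract) rather than the PRBMathUD60x18 route the recommendation names; splitting the 2^192 shift into two 96-bit steps keeps every intermediate inside 256 bits, since mulDiv handles the 512-bit product internally.
```
function getPoolPrice() public view returns (uint256 price, uint256 inversed) {
    (uint160 sqrtPriceX96, , , , , , ) = pool.slot0();
    // raw price in Q96; the 512-bit square is handled inside mulDiv
    uint256 priceX96 = FullMath.mulDiv(uint256(sqrtPriceX96), uint256(sqrtPriceX96), 1 << 96);
    // scale by token0 decimals and fold in the second 96-bit shift
    price = FullMath.mulDiv(priceX96, 10 ** token0.decimals(), 1 << 96);
    inversed = 1e36 / price;
}
```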
Hedging won't work if token1.decimals() < token0.decimals()
medium
`tickToToken0PriceInverted()` performs some arithmetic calculations. It is called by `_getTicksAndMeanPriceFromWei()`, which is called by `hedgeDelta()`. This line underflows and reverts when `token1.decimals()` < `token0.decimals()`:\n```\n uint256 intermediate = inWei.div(10**(token1.decimals() -\n token0.decimals()));\n```\n\nAlso, this line would revert even if the above calculation were done correctly:\n```\n meanPrice = OptionsCompute.convertFromDecimals(meanPrice, \n token0.decimals(), token1.decimals());\n```\n\n```\n function convertFromDecimals(uint256 value, uint8 decimalsA, uint8 decimalsB) internal pure\n returns (uint256) {\n if (decimalsA > decimalsB) {\n revert();\n }\n …\n```\n\nThe impact is that when `token1.decimals()` < `token0.decimals()`, the contract's main function is unusable.
Refactor the calculation to support different decimals combinations. Additionally, add more comprehensive tests to detect similar issues in the future.
null
```\\n uint256 intermediate = inWei.div(10**(token1.decimals() -\\n token0.decimals()));\\n```\\n
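A sketch of a decimals-agnostic version of the offending line, branching on which token has more decimals (the same pattern applies to convertFromDecimals):
```
uint8 d0 = token0.decimals();
uint8 d1 = token1.decimals();
uint256 intermediate = d1 >= d0
    ? inWei.div(10 ** (d1 - d0))
    : inWei.mul(10 ** (d0 - d1));
```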
Overflow danger in _sqrtPriceX96ToUint
medium
_sqrtPriceX96ToUint() only works when the non-fractional component of sqrtPriceX96 takes up to 32 bits, which represents a price ratio of 18446744073709551616. With different token decimals it is not unlikely that this ratio will be crossed, which will make hedgeDelta() revert.\n```\n function _sqrtPriceX96ToUint(uint160 sqrtPriceX96) private pure returns (uint256)\n {\n uint256 numerator1 = uint256(sqrtPriceX96) * \n uint256(sqrtPriceX96);\n return FullMath.mulDiv(numerator1, 1, 1 << 192);\n }\n```\n
Perform the multiplication after converting the numbers to 60x18 variables
null
```\\n function _sqrtPriceX96ToUint(uint160 sqrtPriceX96) private pure returns (uint256)\\n {\\n uint256 numerator1 = uint256(sqrtPriceX96) * \\n uint256(sqrtPriceX96);\\n return FullMath.mulDiv(numerator1, 1, 1 << 192);\\n }\\n```\\n
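The same FullMath two-step removes the 32-bit ceiling here, because mulDiv computes the 512-bit square internally before the first shift:
```
function _sqrtPriceX96ToUint(uint160 sqrtPriceX96) private pure returns (uint256) {
    uint256 priceX96 = FullMath.mulDiv(uint256(sqrtPriceX96), uint256(sqrtPriceX96), 1 << 96);
    return priceX96 >> 96; // drop the remaining 96 fractional bits
}
```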
Insufficient dust checks
low
In `hedgeDelta()`, there is a dust check in the case of a sell-wETH order:\n```\n // sell wETH\n uint256 wethBalance = inversed ? amount1Current : amount0Current;\n if (wethBalance < minAmount) return 0;\n```\n\nHowever, the amount actually used is deltaToUse:\n```\n uint256 deltaToUse = _delta > int256(wethBalance) ? wethBalance : \n uint256(_delta);\n _createUniswapRangeOrder(rangeOrder, deltaToUse, inversed);\n```\n\nThe check should be applied to deltaToUse rather than wethBalance, because it is the minimum of wethBalance and _delta. Additionally, there is no corresponding check for minting with collateral in case _delta is negative.
Correct the current dust checks and also add them in the if clause.
null
```\\n // sell wETH\\n uint256 wethBalance = inversed ? amount1Current : amount0Current;\\n if (wethBalance < minAmount) return 0;\\n```\\n
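A sketch of the corrected check on the effective amount (the negative-_delta collateral branch needs the same guard):
```
uint256 deltaToUse = _delta > int256(wethBalance) ? wethBalance : uint256(_delta);
if (deltaToUse < minAmount) return 0; // dust check on what will actually be used
_createUniswapRangeOrder(rangeOrder, deltaToUse, inversed);
```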
Linear vesting users may not receive vested amount
high
TokenTransmuter supports two types of transmutations, linear and instant. In linear, the allocated amount is released over time until fully vested, while in instant the entire amount is released immediately. transmuteLinear() checks that there are enough output tokens left in the contract before accepting the transfer of input tokens.\n```\n require(IERC20(outputTokenAddress).balanceOf(address(this)) >= \n (totalAllocatedOutputToken - totalReleasedOutputToken), \n "INSUFFICIENT_OUTPUT_TOKEN");\n IERC20(inputTokenAddress).transferFrom(msg.sender, address(0), \n _inputTokenAmount);\n```\n\nHowever, `transmuteInstant()` lacks any remaining-balance check and will operate as long as the contract has enough output tokens to satisfy the request.\n```\n IERC20(inputTokenAddress).transferFrom(msg.sender, address(0), \n _inputTokenAmount);\n SafeERC20.safeTransfer(IERC20(outputTokenAddress), msg.sender, \n allocation);\n emit OutputTokenInstantReleased(msg.sender, allocation, \n outputTokenAddress);\n```\n\nAs a result, it is not ensured that tokens reserved for linear distribution will be available when users request to claim them. An attacker may empty the output balance with a large instant transmute and steal users' future vested tokens.
In transmuteInstant, add a check similar to the one in transmuteLinear. It will ensure allocations are kept faithfully.
null
```\\n require(IERC20(outputTokenAddress).balanceOf(address(this)) >= \\n (totalAllocatedOutputToken - totalReleasedOutputToken), \\n "INSUFFICIENT_OUTPUT_TOKEN");\\n IERC20(inputTokenAddress).transferFrom(msg.sender, address(0), \\n _inputTokenAmount);\\n```\\n
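A sketch of the missing reservation check for `transmuteInstant()`: the contract must still cover all outstanding linear obligations after paying out the instant allocation.
```
require(
    IERC20(outputTokenAddress).balanceOf(address(this)) >=
        (totalAllocatedOutputToken - totalReleasedOutputToken) + allocation,
    "INSUFFICIENT_OUTPUT_TOKEN"
);
IERC20(inputTokenAddress).transferFrom(msg.sender, address(0), _inputTokenAmount);
SafeERC20.safeTransfer(IERC20(outputTokenAddress), msg.sender, allocation);
```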
Multiplier implementation causes limited functionality
low
linearMultiplier and instantMultiplier are used to calculate the output token amount from the input token amount in the transmute functions.\n```\n uint256 allocation = (_inputTokenAmount * linearMultiplier) / \n tokenDecimalDivider;\n …\n uint256 allocation = (_inputTokenAmount * instantMultiplier) / \n tokenDecimalDivider;\n```\n\nThe issue is that they are uint256 variables and can only multiply _inputTokenAmount, never divide it. This limits the protocol's functionality, as vesting pairs where the output token is valued more than the input token cannot be used.
Add a boolean state variable which will describe whether to multiply or divide by the multiplier.
null
```\\n uint256 allocation = (_inputTokenAmount * linearMultiplier) / \\n tokenDecimalDivider;\\n …\\n uint256 allocation = (_inputTokenAmount * instantMultiplier) / \\n tokenDecimalDivider;\\n```\\n
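A sketch of the recommended flag; `divideByMultiplier` is an assumed new state variable set when the pair is configured.
```
bool public divideByMultiplier; // true when the output token is worth more than the input

uint256 allocation = divideByMultiplier
    ? (_inputTokenAmount / linearMultiplier) / tokenDecimalDivider
    : (_inputTokenAmount * linearMultiplier) / tokenDecimalDivider;
```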
Empty orders do not request from oracle and during settlement they use an invalid oracle version with `price=0` which messes up a lot of fees and funding accounting leading to loss of funds for the makers
high
When `market.update` which doesn't change the user's position is called, a new (current) global order is created, but the oracle version is not requested due to the empty order. This means that during the order settlement, it will use a non-existent invalid oracle version with `price = 0`. This price is then used to accumulate all the data in this invalid `Version`, meaning accounting is done using `price = 0`, which is totally incorrect. For instance, all funding and fee calculations multiply by the oracle version's price, thus all time periods between an empty order and the next valid oracle version will not accumulate any fees, which is funds usually lost by makers (as makers won't receive fees/funding for the risk they take).\nWhen `market.update` is called, it requests a new oracle version at the current order's timestamp unless the order is empty:\n```\n// request version\nif (!newOrder.isEmpty()) oracle.request(IMarket(this), account);\n```\n\nThe order is empty when it doesn't modify the user's position:\n```\nfunction isEmpty(Order memory self) internal pure returns (bool) {\n return pos(self).isZero() && neg(self).isZero();\n}\n\nfunction pos(Order memory self) internal pure returns (UFixed6) {\n return self.makerPos.add(self.longPos).add(self.shortPos);\n}\n\nfunction neg(Order memory self) internal pure returns (UFixed6) {\n return self.makerNeg.add(self.longNeg).add(self.shortNeg);\n}\n```\n\nLater, when a valid oracle version is committed, the settlement process uses the oracle version at the position:\n```\nfunction _processOrderGlobal(\n Context memory context,\n SettlementContext memory settlementContext,\n uint256 newOrderId,\n Order memory newOrder\n) private {\n // @audit no oracle version at this timestamp, thus it's invalid with `price=0`\n OracleVersion memory oracleVersion = oracle.at(newOrder.timestamp); \n\n context.pending.global.sub(newOrder);\n // @audit order is invalidated (it's already empty anyway), but the `price=0` is still used everywhere\n if (!oracleVersion.valid) newOrder.invalidate();\n\n VersionAccumulationResult memory accumulationResult;\n (settlementContext.latestVersion, context.global, accumulationResult) = VersionLib.accumulate(\n settlementContext.latestVersion,\n context.global,\n context.latestPosition.global,\n newOrder,\n settlementContext.orderOracleVersion,\n oracleVersion, // @audit <<< when oracleVersion is invalid, the `price=0` will still be used here\n context.marketParameter,\n context.riskParameter\n );\n// rest of code\n```\n\nIf the oracle version is invalid, the order is invalidated, but the `price=0` is still used to accumulate.\nIt doesn't affect pnl from the price move, because the final oracle version is always valid, thus the correct price is used to evaluate all possible account actions; however, it does affect accumulated fees and funding:\n```\nfunction _accumulateLinearFee(\n Version memory next,\n AccumulationContext memory context,\n VersionAccumulationResult memory result\n) private pure {\n (UFixed6 makerLinearFee, UFixed6 makerSubtractiveFee) = _accumulateSubtractiveFee(\n context.riskParameter.makerFee.linear(\n Fixed6Lib.from(context.order.makerTotal()),\n context.toOracleVersion.price.abs() // @audit <<< price == 0 for invalid oracle version\n ),\n context.order.makerTotal(),\n context.order.makerReferral,\n next.makerLinearFee\n );\n// rest of code\n // Compute long-short funding rate\n Fixed6 funding = context.global.pAccumulator.accumulate(\n context.riskParameter.pController,\n toSkew.unsafeDiv(Fixed6Lib.from(context.riskParameter.takerFee.scale)).min(Fixed6Lib.ONE).max(Fixed6Lib.NEG_ONE),\n context.fromOracleVersion.timestamp,\n context.toOracleVersion.timestamp,\n context.fromPosition.takerSocialized().mul(context.fromOracleVersion.price.abs()) // @audit <<< price == 0 for invalid oracle version\n );\n// rest of code\nfunction _accumulateInterest(\n Version memory next,\n AccumulationContext memory context\n) private pure returns (Fixed6 interestMaker, Fixed6 interestLong, Fixed6 interestShort, UFixed6 interestFee) {\n // @audit price = 0 and notional = 0 for invalid oracle version\n UFixed6 notional = context.fromPosition.long.add(context.fromPosition.short).min(context.fromPosition.maker).mul(context.fromOracleVersion.price.abs());\n// rest of code\n```\n\nAs can be seen, all funding and fee accumulations multiply by the oracle version's price (which is 0), thus during these time intervals fees and funding are 0.\nThis will happen by itself during any period when there are no orders, because the oracle provider's settlement callback uses `market.update` with an empty order to settle the user's account, thus any non-empty order is always followed by an empty order for the next version and `price = 0` will be used to settle it until the next non-empty order:\n```\nfunction _settle(IMarket market, address account) private {\n market.update(account, UFixed6Lib.MAX, UFixed6Lib.MAX, UFixed6Lib.MAX, Fixed6Lib.ZERO, false);\n}\n```\n\nThe scenario above is demonstrated in the test; add this to test/unit/market/Market.test.ts:\n```\nit('no fees accumulation due to invalid version with price = 0', async () => {\n\nfunction setupOracle(price: string, timestamp : number, nextTimestamp : number) {\n const oracleVersion = {\n price: parse6decimal(price),\n timestamp: timestamp,\n valid: true,\n }\n oracle.at.whenCalledWith(oracleVersion.timestamp).returns(oracleVersion)\n oracle.status.returns([oracleVersion, nextTimestamp])\n oracle.request.returns()\n}\n\nfunction setupOracleAt(price: string, valid : boolean, timestamp : number) {\n const oracleVersion = {\n price: parse6decimal(price),\n timestamp: timestamp,\n valid: valid,\n }\n oracle.at.whenCalledWith(oracleVersion.timestamp).returns(oracleVersion)\n}\n\nconst riskParameter = { // rest of code(await market.riskParameter()) }\nconst riskParameterMakerFee = { // rest of coderiskParameter.makerFee }\nriskParameterMakerFee.linearFee = parse6decimal('0.005')\nriskParameterMakerFee.proportionalFee = parse6decimal('0.0025')\nriskParameterMakerFee.adiabaticFee = parse6decimal('0.01')\nriskParameter.makerFee = riskParameterMakerFee\nconst riskParameterTakerFee = { // rest of coderiskParameter.takerFee }\nriskParameterTakerFee.linearFee = parse6decimal('0.005')\nriskParameterTakerFee.proportionalFee = parse6decimal('0.0025')\nriskParameterTakerFee.adiabaticFee = parse6decimal('0.01')\nriskParameter.takerFee = riskParameterTakerFee\nawait market.connect(owner).updateRiskParameter(riskParameter)\n\ndsu.transferFrom.whenCalledWith(user.address, market.address, COLLATERAL.mul(1e12)).returns(true)\ndsu.transferFrom.whenCalledWith(userB.address, market.address, COLLATERAL.mul(1e12)).returns(true)\n\nsetupOracle('100', TIMESTAMP, TIMESTAMP + 100);\n\nawait market\n .connect(user)\n ['update(address,uint256,uint256,uint256,int256,bool)'](user.address, POSITION, 0, 0, COLLATERAL, false);\nawait market\n .connect(userB)\n ['update(address,uint256,uint256,uint256,int256,bool)'](userB.address, 0, POSITION, 0, COLLATERAL, false);\n\nsetupOracle('100', TIMESTAMP + 100, TIMESTAMP + 200);\nawait market\n .connect(user)\n ['update(address,uint256,uint256,uint256,int256,bool)'](user.address, POSITION, 0, 0, 0, false);\n\n// oracle is committed at timestamp+200\nsetupOracle('100', TIMESTAMP + 200, TIMESTAMP + 300);\nawait market\n .connect(user)\n ['update(address,uint256,uint256,uint256,int256,bool)'](user.address, POSITION, 0, 0, 0, false);\n\n// oracle is not committed at timestamp+300\nsetupOracle('100', TIMESTAMP + 200, TIMESTAMP + 400);\nsetupOracleAt('0', false, TIMESTAMP + 300);\nawait market\n .connect(user)\n ['update(address,uint256,uint256,uint256,int256,bool)'](user.address, POSITION, 0, 0, 0, false);\n\n// settle to see makerValue at all versions\nsetupOracle('100', TIMESTAMP + 400, TIMESTAMP + 500);\n\nawait market.settle(user.address);\nawait market.settle(userB.address);\n\nvar ver = await market.versions(TIMESTAMP + 200);\nconsole.log("version 200: longValue: " + ver.longValue + " makerValue: " + ver.makerValue);\nvar ver = await market.versions(TIMESTAMP + 300);\nconsole.log("version 300: longValue: " + ver.longValue + " makerValue: " + ver.makerValue);\nvar ver = await market.versions(TIMESTAMP + 400);\nconsole.log("version 400: longValue: " + ver.longValue + " makerValue: " + ver.makerValue);\n})\n```\n\nConsole log:\n```\nversion 200: longValue: -318 makerValue: 285\nversion 300: longValue: -100000637 makerValue: 100500571\nversion 400: longValue: -637 makerValue: 571\n```\n\nNotice that fees are accumulated between versions 200 and 300, and that version 300 has a huge pnl (because it's evaluated at price = 0), which returns to normal at version 400; but no fees are accumulated between versions 300 and 400 due to version 300 having `price = 0`.
Keep the price from the previous valid oracle version and use it instead of oracle version's one if oracle version's price == 0.
All fees and funding are incorrectly calculated as 0 during any period when there are no non-empty orders (which will be substantially more than 50% of the time, more like 90% of the time). Since most fees and funding are received by makers as a compensation for their price risk, this means makers will lose all these under-calculated fees and will receive a lot less fees and funding than expected.
```\\n// request version\\nif (!newOrder.isEmpty()) oracle.request(IMarket(this), account);\\n```\\n
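One way to express the recommendation inside `_processOrderGlobal`, assuming `settlementContext.orderOracleVersion` still holds the latest valid version: fall back to its price whenever the settling version carries `price = 0`.
```
OracleVersion memory oracleVersion = oracle.at(newOrder.timestamp);
context.pending.global.sub(newOrder);
if (!oracleVersion.valid) newOrder.invalidate();
if (oracleVersion.price.isZero()) {
    // sketch: accumulate fees/funding at the last valid price instead of 0
    oracleVersion.price = settlementContext.orderOracleVersion.price;
}
```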
Vault global shares and assets change will mismatch local shares and assets change during settlement due to incorrect `_withoutSettlementFeeGlobal` formula
high
Every vault update which involves a change of position in the underlying markets is charged a `settlementFee` by the Market. Since many users can deposit and redeem during the same oracle version, this `settlementFee` is shared equally between all users of the same oracle version. However, there is an issue: `settlementFee` is charged once for deposits and redeems combined, yet `_withoutSettlementFeeGlobal` subtracts `settlementFee` in full both for deposits and for redeems, meaning the global fee is effectively subtracted twice (once for deposits and again for redeems). The local fee, by contrast, is subtracted proportionally to `checkpoint.orders`, with the sum of the fees subtracted equal to exactly one `settlementFee`. This difference between the global and local `settlementFee` calculations leads to inflated `shares` and `assets` added for user deposits (local state) compared to the vault overall (global state).\nHere is an easy scenario to demonstrate the issue:\nSettlementFee = `$10`\nUser1 deposits `$10` for oracle version `t = 100`\nUser2 redeems `10 shares` (worth $10) for the same oracle version `t = 100` (checkpoint.orders = 2)\nOnce the oracle version `t = 100` settles, we have the following: 4.1. Global deposits = `$10`, redeems = `$10` 4.2. Global deposits convert to `0 shares` (because `_withoutSettlementFeeGlobal(10)` applies the `settlementFee` of `$10` in full, returning 10-10=0) 4.3. Global redeems convert to `0 assets` (because `_withoutSettlementFeeGlobal(10)` applies the `settlementFee` of `$10` in full, returning 10-10=0) 4.4. User1's deposit of `$10` converts to `5 shares` (because `_withoutSettlementFeeLocal(10)` applies a `settlementFee` of `$5` (because there are 2 orders), returning 10-5=5) 4.5. User2's redeem of `10 shares` converts to `$5` (for the same reason)\nFrom the example above it can be seen that:\nUser1 receives 5 shares, but global vault shares didn't increase. Over time this difference will keep growing, potentially leading to a situation where many user redeems bring global shares to 0, while many users still hold local shares which they will be unable to redeem due to underflow, thus losing funds.\nUser2's claimable assets increase by $5, but global claimable assets didn't change, meaning User2 will be unable to claim these assets due to an underflow when trying to decrease global assets, leading to a loss of funds for User2.\nThe underflow in both cases will happen in `Vault._update` when trying to update the global account:\n```\nfunction update(\n Account memory self,\n uint256 currentId,\n UFixed6 assets,\n UFixed6 shares,\n UFixed6 deposit,\n UFixed6 redemption\n) internal pure {\n self.current = currentId;\n // @audit global account will have less assets and shares than sum of local accounts\n (self.assets, self.shares) = (self.assets.sub(assets), self.shares.sub(shares));\n (self.deposit, self.redemption) = (self.deposit.add(deposit), self.redemption.add(redemption));\n}\n```\n
Calculate total orders to deposit and total orders to redeem (in addition to total orders overall). Then `settlementFee` should be multiplied by `deposit/orders` for `toGlobalShares` and by `redeems/orders` for `toGlobalAssets`. This weightening of `settlementFee` will make it in-line with local order weights.
Any time there are both deposits and redeems in the same oracle version, the users receive more (local) shares and assets than overall vault shares and assets increase (global). This mismatch causes:\\nSystematic increase of (sum of user shares - global shares), which can lead to bank run since the last users who try to redeem will be unable to do so due to underflow.\\nSystematic increase of (sum of user assets - global assets), which will lead to users being unable to claim their redeemed assets due to underflow.\\nThe total difference in local and global `shares+assets` equals to `settlementFee` per each oracle version with both deposits and redeems. This can add up to significant amounts (at `settlementFee` = $1 this can be $100-$1000 per day), meaning it will quickly become visible especially for point 2., because typically global claimable assets are at or near 0 most of the time, since users usually redeem and then immediately claim, thus any difference of global and local assets will quickly lead to users being unable to claim.
```\\nfunction update(\\n Account memory self,\\n uint256 currentId,\\n UFixed6 assets,\\n UFixed6 shares,\\n UFixed6 deposit,\\n UFixed6 redemption\\n) internal pure {\\n self.current = currentId;\\n // @audit global account will have less assets and shares than sum of local accounts\\n (self.assets, self.shares) = (self.assets.sub(assets), self.shares.sub(shares));\\n (self.deposit, self.redemption) = (self.deposit.add(deposit), self.redemption.add(redemption));\\n}\\n```\\n
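A sketch of the weighting, assuming the checkpoint additionally tracks deposit and redeem order counts:
```
// split the settlement fee pro-rata between sides, mirroring how the
// local accounts already share it per order
UFixed6 depositFee = settlementFee.muldiv(UFixed6Lib.from(depositOrders), UFixed6Lib.from(totalOrders));
UFixed6 redeemFee = settlementFee.sub(depositFee); // remainder charged to the redeem side
// use depositFee inside toGlobalShares and redeemFee inside toGlobalAssets
```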
Requested oracle versions, which have expired, must return this oracle version as invalid, but they return it as a normal version with previous version's price instead
high
Each market action requests a new oracle version which must be committed by the keepers. However, if the keepers are unable to commit the requested version's price (for example, no price is available for the time interval, or the network or keepers are down), then after a certain timeout this oracle version will be committed as invalid, using the previous valid version's price.\nThe issue is that when this expired oracle version is used by the market (via oracle.at), the version returned will be valid (valid = true), because the oracle returns a version as invalid only if `price = 0`, but the `commit` function sets the previous version's price for these versions, so it is not 0.\nThis leads to the market using invalid versions as if they were valid, keeping the orders (instead of invalidating them), which is broken core functionality and a security risk for the protocol.\nWhen a requested oracle version is committed but expired (committed after a certain timeout), the price of the previous valid version is set for this expired oracle version:\n```\nfunction _commitRequested(OracleVersion memory version) private returns (bool) {\n if (block.timestamp <= (next() + timeout)) {\n if (!version.valid) revert KeeperOracleInvalidPriceError();\n _prices[version.timestamp] = version.price;\n } else {\n // @audit previous valid version's price is set for expired version\n _prices[version.timestamp] = _prices[_global.latestVersion]; \n }\n _global.latestIndex++;\n return true;\n}\n```\n\nLater, `Market._processOrderGlobal` reads the oracle version using `oracle.at`, invalidating the order if the version is invalid:\n```\nfunction _processOrderGlobal(\n Context memory context,\n SettlementContext memory settlementContext,\n uint256 newOrderId,\n Order memory newOrder\n) private {\n OracleVersion memory oracleVersion = oracle.at(newOrder.timestamp);\n\n context.pending.global.sub(newOrder);\n if (!oracleVersion.valid) newOrder.invalidate();\n```\n\nHowever, an expired oracle version will return `valid = true`, because this flag is only set to `false` if price = 0:\n```\nfunction at(uint256 timestamp) public view returns (OracleVersion memory oracleVersion) {\n (oracleVersion.timestamp, oracleVersion.price) = (timestamp, _prices[timestamp]);\n oracleVersion.valid = !oracleVersion.price.isZero(); // @audit <<< valid = false only if price = 0\n}\n```\n\nThis means that `_processOrderGlobal` will treat this expired oracle version as valid and won't invalidate the order.
Add validity map along with the price map to `KeeperOracle` when recording commited price.
Market uses invalid (expired) oracle versions as if they're valid, keeping the orders (instead of invalidating them), which is a broken core functionality and a security risk for the protocol.
```\\nfunction _commitRequested(OracleVersion memory version) private returns (bool) {\\n if (block.timestamp <= (next() + timeout)) {\\n if (!version.valid) revert KeeperOracleInvalidPriceError();\\n _prices[version.timestamp] = version.price;\\n } else {\\n // @audit previous valid version's price is set for expired version\\n _prices[version.timestamp] = _prices[_global.latestVersion]; \\n }\\n _global.latestIndex++;\\n return true;\\n}\\n```\\n
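A sketch of the validity map in KeeperOracle; `_valid` is a new mapping stored alongside `_prices`.
```
mapping(uint256 => bool) private _valid; // explicit validity per committed timestamp

function _commitRequested(OracleVersion memory version) private returns (bool) {
    if (block.timestamp <= (next() + timeout)) {
        if (!version.valid) revert KeeperOracleInvalidPriceError();
        _prices[version.timestamp] = version.price;
        _valid[version.timestamp] = true;
    } else {
        _prices[version.timestamp] = _prices[_global.latestVersion];
        _valid[version.timestamp] = false; // expired: carry the price but flag invalid
    }
    _global.latestIndex++;
    return true;
}

function at(uint256 timestamp) public view returns (OracleVersion memory oracleVersion) {
    (oracleVersion.timestamp, oracleVersion.price) = (timestamp, _prices[timestamp]);
    oracleVersion.valid = _valid[timestamp]; // no longer inferred from price != 0
}
```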
When vault's market weight is set to 0 to remove the market from the vault, vault's leverage in this market is immediately set to max leverage risking position liquidation
medium
If any market has to be removed from the vault, the only way to do this is to set this market's weight to 0. The problem is that the first vault rebalance will immediately withdraw the maximum possible collateral from this market, leaving the vault's leverage there at the maximum possible value and risking liquidation of the vault's position. This is especially dangerous if the vault's position in the removed market cannot be closed due to high skew, so the minimum position is not 0 while the leverage is at the maximum possible value. As a result, vault depositors can lose funds due to liquidation of the vault's position in this market.\nWhen the vault is rebalanced, each market's collateral is calculated as follows:\n```\n marketCollateral = marketContext.margin\n .add(collateral.sub(totalMargin).mul(marketContext.registration.weight));\n\n UFixed6 marketAssets = assets\n .mul(marketContext.registration.weight)\n .min(marketCollateral.mul(LEVERAGE_BUFFER));\n```\n\nFor removed markets (weight = 0), `marketCollateral` will be set to `marketContext.margin` (i.e. the minimum valid collateral to hold the position at max leverage), and `marketAssets` will be set to 0. But the position is later adjusted in case minPosition is not 0:\n```\n target.position = marketAssets\n .muldiv(marketContext.registration.leverage, marketContext.latestPrice.abs())\n .max(marketContext.minPosition)\n .min(marketContext.maxPosition);\n```\n\nThis means that the vault's position in a market with weight 0 will be at max leverage until it is liquidated or the position can be closed.\nThe scenario above is demonstrated in the test; change the following test in test/integration/vault/Vault.test.ts:\n```\n it('simple deposits and redemptions', async () => {\n// rest of code\n // Now we should have opened positions.\n // The positions should be equal to (smallDeposit + largeDeposit) * leverage originalOraclePrice.\n expect(await position()).to.equal(\n smallDeposit.add(largeDeposit).mul(leverage).mul(4).div(5).div(originalOraclePrice),\n )\n expect(await btcPosition()).to.equal(\n smallDeposit.add(largeDeposit).mul(leverage).div(5).div(btcOriginalOraclePrice),\n )\n\n /*** remove all lines after this and replace with the following code: ***/\n\n console.log("pos1 = " + (await position()) + " pos2 = " + (await btcPosition()) + " col1 = " + (await collateralInVault()) + " col2 = " + (await btcCollateralInVault()));\n\n // update weight\n await vault.connect(owner).updateWeights([parse6decimal('1.0'), parse6decimal('0')])\n\n // do small withdrawal to trigger rebalance\n await vault.connect(user).update(user.address, 0, smallDeposit, 0)\n await updateOracle()\n\n console.log("pos1 = " + (await position()) + " pos2 = " + (await btcPosition()) + " col1 = " + (await collateralInVault()) + " col2 = " + (await btcCollateralInVault()));\n })\n```\n\nConsole log:\n```\npos1 = 12224846 pos2 = 206187 col1 = 8008000000 col2 = 2002000000\npos1 = 12224846 pos2 = 206187 col1 = 9209203452 col2 = 800796548\n```\n\nNotice that after the rebalance, the position in the removed market (pos2) is still the same, but the collateral (col2) is reduced to the minimum allowed.
Ensure that the market's collateral is based on leverage even if `weight = 0`
Market removed from the vault (weight set to 0) is put at max leverage and has a high risk of being liquidated, thus losing vault depositors funds.
```\\n marketCollateral = marketContext.margin\\n .add(collateral.sub(totalMargin).mul(marketContext.registration.weight));\\n\\n UFixed6 marketAssets = assets\\n .mul(marketContext.registration.weight)\\n .min(marketCollateral.mul(LEVERAGE_BUFFER));\\n```\\n
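A sketch of the recommendation: even at weight 0, keep enough collateral for whatever position the market is still forced to hold, at the registered target leverage (`muldiv` and the context fields follow the strategy code above).
```
// collateral needed to hold minPosition at the target leverage
UFixed6 minLeveragedCollateral = marketContext.minPosition
    .muldiv(marketContext.latestPrice.abs(), marketContext.registration.leverage);
marketCollateral = marketCollateral.max(minLeveragedCollateral);
```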

These datasets serve as a basis for other datasets in this family, which are built for tasks such as classification or seq2seq generation.

1. Smart Contract Vulnerabilities with Explanations (vulnerable-w-explanations)

This repository offers two datasets of Solidity functions. The first comprises vulnerable Solidity functions audited by five auditing companies (Codehawks, ConsenSys, Cyfrin, Sherlock, Trust Security). These audits are compiled by Solodit.

Usage

```python
from datasets import load_dataset

dataset = load_dataset(
  "msc-smart-contract-audition/vulnerable-functions-base",
  split='train',
  escapechar='\\',
)
```
| # | Field | Description |
|---|-------|-------------|
| 1 | name | Title of the audit report |
| 2 | severity | Severity of the vulnerability (Low, Medium, High) |
| 3 | description | Description/explanation of the vulnerability |
| 4 | recommendation | Recommended mitigation of the vulnerability |
| 5 | impact | Explains how the vulnerability affects the smart contract (optional) |
| 6 | function | Raw vulnerable Solidity code (sometimes inaccurate; best efforts were made to clean up the dataset, but some rows might include other programming languages, e.g. JavaScript) |

2. Verified functions (verified-functions)

This repository also includes a dataset of functions with no known vulnerabilities. They were scraped from Etherscan; specifically, the functions are part of the top 500 audited contracts holding at least 1 ETH.

Usage

```python
from datasets import load_dataset

dataset = load_dataset(
  "msc-smart-contract-audition/vulnerable-functions-base",
  name="verified-functions",
  split='train',
  escapechar='\\',
)
```
| # | Field | Description |
|---|-------|-------------|
| 1 | function | Raw Solidity code |

Additional Info

  • The newline characters are escaped (i.e. \\n)
  • The dataset has a single split, train (hence the adjusted loading instructions).