Remove recovery related code #797

Merged · 5 commits · Oct 22, 2024
11 changes: 4 additions & 7 deletions Readme.md
@@ -37,17 +37,14 @@ Sequencer has 6 API routes.
indeed in the tree. The inclusion proof is then returned to the API caller.
3. `/deleteIdentity` - Takes an identity commitment hash, ensures that it exists and hasn't been deleted yet. This
identity is then scheduled for deletion.
-4. `/recoverIdentity` - Takes two identity commitment hashes. The first must exist and will be scheduled for deletion
-and the other will be inserted as a replacement after the first identity has been deleted and a set amount of time (
-depends on configuration parameters) has passed.
-5. `/verifySemaphoreProof` - This call takes root, signal hash, nullifier hash, external nullifier hash and a proof.
+4. `/verifySemaphoreProof` - This call takes root, signal hash, nullifier hash, external nullifier hash and a proof.
The proving key is fetched based on the depth index, and verification key as well.
The list of prime fields is created based on request input mentioned before, and then we proceed to verify the proof.
Sequencer uses groth16 zk-SNARK implementation.
The API call returns the proof as a response.
-6. `/addBatchSize` - Adds a prover with specific batch size to a list of provers.
-7. `/removeBatchSize` - Removes the prover based on batch size.
-8. `/listBatchSizes` - Lists all provers that are added to the Sequencer.
+5. `/addBatchSize` - Adds a prover with specific batch size to a list of provers.
+6. `/removeBatchSize` - Removes the prover based on batch size.
+7. `/listBatchSizes` - Lists all provers that are added to the Sequencer.

## Getting Started

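For illustration, each route above is a plain HTTP endpoint taking a JSON body. A minimal client sketch for `/deleteIdentity` follows; the base URL and the `identityCommitment` field name are assumptions for this sketch, not taken from the sequencer's handlers.

// Hypothetical client for the /deleteIdentity route described above.
// The URL and JSON field name are illustrative assumptions.
use serde_json::json;

#[tokio::main]
async fn main() -> Result<(), reqwest::Error> {
    let client = reqwest::Client::new();
    let res = client
        .post("http://localhost:8080/deleteIdentity")
        .json(&json!({ "identityCommitment": "0x1234" }))
        .send()
        .await?;
    // Expect a success status once the identity is queued for deletion.
    println!("deleteIdentity -> {}", res.status());
    Ok(())
}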
13 changes: 13 additions & 0 deletions schemas/database/016_remove_recovery.down.sql
@@ -0,0 +1,13 @@
+CREATE TABLE recoveries (
+    existing_commitment BYTEA NOT NULL UNIQUE,
+    new_commitment BYTEA NOT NULL UNIQUE
+);
+
+ALTER TABLE unprocessed_identities
+    ADD COLUMN eligibility TIMESTAMPTZ,
+    ADD COLUMN status VARCHAR(50) NOT NULL,
+    ADD COLUMN processed_at TIMESTAMPTZ,
+    ADD COLUMN error_message TEXT;
+
+ALTER TABLE unprocessed_identities
+    DROP CONSTRAINT unique_commitment;
11 changes: 11 additions & 0 deletions schemas/database/016_remove_recovery.up.sql
@@ -0,0 +1,11 @@
+DROP TABLE recoveries;
+
+ALTER TABLE unprocessed_identities
+    DROP COLUMN eligibility,
+    DROP COLUMN status,
+    DROP COLUMN processed_at,
+    DROP COLUMN error_message;
+
+ALTER TABLE unprocessed_identities
+    ADD CONSTRAINT unique_commitment UNIQUE (commitment);
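With the recoveries table gone, the up migration makes `commitment` unique in `unprocessed_identities`, so duplicate queue entries can be rejected by the database itself. A minimal sketch of leaning on that constraint, assuming sqlx against PostgreSQL and a bytea commitment column; the query text is illustrative, not the sequencer's actual implementation.

use sqlx::PgPool;

// Inserts a commitment and silently skips duplicates by relying on the
// `unique_commitment` constraint added in the up migration above.
async fn queue_commitment(pool: &PgPool, commitment: &[u8]) -> sqlx::Result<bool> {
    let result = sqlx::query(
        "INSERT INTO unprocessed_identities (commitment) VALUES ($1) \
         ON CONFLICT ON CONSTRAINT unique_commitment DO NOTHING",
    )
    .bind(commitment)
    .execute(pool)
    .await?;
    // Zero rows affected means the commitment was already queued.
    Ok(result.rows_affected() > 0)
}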

29 changes: 0 additions & 29 deletions schemas/openapi.yaml
@@ -62,26 +62,6 @@ paths:
          schema:
            description: 'Identity could not be queued for deletion'
            type: 'string'
-  /recoverIdentity:
-    post:
-      summary: 'Queues a recovery request, deleting the previous identity specified and inserting the new one.
-        New insertions must wait a specified time delay before being included in the merkle tree'
-      requestBody:
-        required: true
-        content:
-          application/json:
-            schema:
-              $ref: '#/components/schemas/RecoveryRequest'
-      responses:
-        '202':
-          description: 'Identity has been successfully queued for recovery'
-        '400':
-          description: 'Invalid request'
-          content:
-            application/json:
-              schema:
-                description: 'Identity could not be queued for recovery'
-                type: 'string'
  /inclusionProof:
    post:
      summary: 'Get Merkle inclusion proof'
@@ -152,15 +132,6 @@

components:
  schemas:
-    RecoveryRequest:
-      type: object
-      properties:
-        previousIdentityCommitment:
-          type: string
-          pattern: '^[A-F0-9]{64}$'
-        newIdentityCommitment:
-          type: string
-          pattern: '^[A-F0-9]{64}$'
    IdentityCommitment:
      type: object
      properties:
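The removed `RecoveryRequest` schema constrained each commitment to 64 uppercase hex characters via `pattern: '^[A-F0-9]{64}$'`. For reference, an equivalent client-side check of that pattern (using the regex crate; illustrative only, not part of the sequencer):

use regex::Regex;

// Mirrors the OpenAPI pattern from the removed RecoveryRequest schema.
fn is_valid_commitment(s: &str) -> bool {
    Regex::new(r"^[A-F0-9]{64}$").unwrap().is_match(s)
}

fn main() {
    assert!(is_valid_commitment(&"A1".repeat(32))); // 64 uppercase hex chars
    assert!(!is_valid_commitment("0xdeadbeef")); // lowercase and 0x prefix fail
}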
14 changes: 2 additions & 12 deletions src/app.rs
@@ -16,7 +16,7 @@ use crate::identity::processor::{
};
use crate::identity::validator::IdentityValidator;
use crate::identity_tree::initializer::TreeInitializer;
-use crate::identity_tree::{Hash, InclusionProof, RootItem, TreeState, TreeVersionOps};
+use crate::identity_tree::{Hash, RootItem, TreeState, TreeVersionOps};
use crate::prover::map::initialize_prover_maps;
use crate::prover::repository::ProverRepository;
use crate::prover::{ProverConfig, ProverType};
@@ -164,7 +164,7 @@ impl App {
            return Err(ServerError::DuplicateCommitment);
        }

-        tx.insert_new_identity(commitment, Utc::now()).await?;
+        tx.insert_unprocessed_identity(commitment).await?;

        tx.commit().await?;

@@ -311,16 +311,6 @@ impl App {
            return Err(ServerError::InvalidCommitment);
        }

-        if let Some(error_message) = self.database.get_unprocessed_error(commitment).await? {
-            return Ok(InclusionProof {
-                root: None,
-                proof: None,
-                message: error_message
-                    .or_else(|| Some("identity exists but has not yet been processed".to_string())),
-            }
-            .into());
-        }
-
        let item = self
            .database
            .get_identity_leaf_index(commitment)
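The deleted branch answered inclusion-proof queries for still-unprocessed identities with an `InclusionProof` whose `root` and `proof` were `None` and whose `message` explained the state. A sketch of consuming that response shape; the struct here is inferred from the removed code, not the crate's actual definition.

// Response shape inferred from the removed code above; illustrative only.
struct InclusionProof {
    root: Option<String>,
    proof: Option<String>,
    message: Option<String>,
}

fn describe(p: &InclusionProof) -> String {
    match (&p.root, &p.proof) {
        // Both present: the identity is in the tree.
        (Some(root), Some(_)) => format!("included under root {root}"),
        // The removed branch produced this case for unprocessed identities.
        _ => p.message.clone().unwrap_or_else(|| "pending".to_string()),
    }
}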
20 changes: 0 additions & 20 deletions src/config.rs
@@ -69,18 +69,6 @@ pub struct AppConfig {
    #[serde(default = "default::min_batch_deletion_size")]
    pub min_batch_deletion_size: usize,

-    /// The parameter to control the delay between mining a deletion batch and
-    /// inserting the recovery identities
-    ///
-    /// The sequencer will insert the recovery identities after
-    /// max_epoch_duration_seconds + root_history_expiry) seconds have passed
-    ///
-    /// By default the value is set to 0 so the sequencer will only use
-    /// root_history_expiry
-    #[serde(with = "humantime_serde")]
-    #[serde(default = "default::max_epoch_duration")]
-    pub max_epoch_duration: Duration,
-
    /// The maximum number of windows to scan for finalization logs
    #[serde(default = "default::scanning_window_size")]
    pub scanning_window_size: u64,
@@ -284,10 +272,6 @@ pub mod default {
        100
    }

-    pub fn max_epoch_duration() -> Duration {
-        Duration::from_secs(0)
-    }
-
    pub fn scanning_window_size() -> u64 {
        100
    }
@@ -375,7 +359,6 @@ mod tests {
batch_insertion_timeout = "3m"
batch_deletion_timeout = "1h"
min_batch_deletion_size = 100
-max_epoch_duration = "0s"
scanning_window_size = 100
scanning_chain_head_offset = 0
time_between_scans = "30s"
@@ -428,7 +411,6 @@ mod tests {
batch_insertion_timeout = "3m"
batch_deletion_timeout = "1h"
min_batch_deletion_size = 100
-max_epoch_duration = "0s"
scanning_window_size = 100
scanning_chain_head_offset = 0
time_between_scans = "30s"
@@ -466,7 +448,6 @@ mod tests {
SEQ__APP__BATCH_INSERTION_TIMEOUT=3m
SEQ__APP__BATCH_DELETION_TIMEOUT=1h
SEQ__APP__MIN_BATCH_DELETION_SIZE=100
-SEQ__APP__MAX_EPOCH_DURATION=0s
SEQ__APP__SCANNING_WINDOW_SIZE=100
SEQ__APP__SCANNING_CHAIN_HEAD_OFFSET=0
SEQ__APP__TIME_BETWEEN_SCANS=30s
@@ -509,7 +490,6 @@ mod tests {
SEQ__APP__BATCH_INSERTION_TIMEOUT=3m
SEQ__APP__BATCH_DELETION_TIMEOUT=1h
SEQ__APP__MIN_BATCH_DELETION_SIZE=100
-SEQ__APP__MAX_EPOCH_DURATION=0s
SEQ__APP__SCANNING_WINDOW_SIZE=100
SEQ__APP__SCANNING_CHAIN_HEAD_OFFSET=0
SEQ__APP__TIME_BETWEEN_SCANS=30s
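The removed `max_epoch_duration` field was deserialized with `#[serde(with = "humantime_serde")]`, the same mechanism the remaining duration fields in the test configs above use, so strings like "0s" or "3m" become a `std::time::Duration`. A self-contained sketch of that pattern, assuming the serde, toml, and humantime-serde crates:

use serde::Deserialize;
use std::time::Duration;

#[derive(Debug, Deserialize)]
struct AppConfig {
    // Humantime strings such as "3m" or "1h" become std Durations.
    #[serde(with = "humantime_serde")]
    batch_insertion_timeout: Duration,
}

fn main() {
    let cfg: AppConfig = toml::from_str(r#"batch_insertion_timeout = "3m""#).unwrap();
    assert_eq!(cfg.batch_insertion_timeout, Duration::from_secs(180));
}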
44 changes: 3 additions & 41 deletions src/contracts/mod.rs
@@ -2,18 +2,17 @@
pub mod abi;
pub mod scanner;

-use anyhow::{anyhow, bail, Context};
+use anyhow::{anyhow, bail};
use ethers::providers::Middleware;
-use ethers::types::{H256, U256};
+use ethers::types::U256;
use tracing::{error, info, instrument};

-use self::abi::{BridgedWorldId, DeleteIdentitiesCall, WorldId};
+use self::abi::{BridgedWorldId, WorldId};
use crate::config::Config;
use crate::ethereum::{Ethereum, ReadProvider};
use crate::identity::processor::TransactionId;
use crate::prover::identity::Identity;
use crate::prover::Proof;
-use crate::utils::index_packing::unpack_indices;

/// A structure representing the interface to the batch-based identity manager
/// contract.
@@ -22,7 +21,6 @@ pub struct IdentityManager {
    ethereum: Ethereum,
    abi: WorldId<ReadProvider>,
    secondary_abis: Vec<BridgedWorldId<ReadProvider>>,
-    tree_depth: usize,
}

impl IdentityManager {
@@ -84,22 +82,15 @@ impl IdentityManager {
            secondary_abis.push(abi);
        }

-        let tree_depth = config.tree.tree_depth;
-
        let identity_manager = Self {
            ethereum,
            abi,
            secondary_abis,
-            tree_depth,
        };

        Ok(identity_manager)
    }

-    pub async fn root_history_expiry(&self) -> anyhow::Result<U256> {
-        Ok(self.abi.get_root_history_expiry().call().await?)
-    }
-
    #[instrument(level = "debug", skip(self, identity_commitments, proof_data))]
    pub async fn register_identities(
        &self,
@@ -171,35 +162,6 @@ impl IdentityManager {
        Ok(latest_root)
    }

-    /// Fetches the identity commitments from a
-    /// `deleteIdentities` transaction by tx hash
-    #[instrument(level = "debug", skip_all)]
-    pub async fn fetch_deletion_indices_from_tx(
-        &self,
-        tx_hash: H256,
-    ) -> anyhow::Result<Vec<usize>> {
-        let provider = self.ethereum.provider();
-
-        let tx = provider
-            .get_transaction(tx_hash)
-            .await?
-            .context("Missing tx")?;
-
-        use ethers::abi::AbiDecode;
-        let delete_identities = DeleteIdentitiesCall::decode(&tx.input)?;
-
-        let packed_deletion_indices: &[u8] = delete_identities.packed_deletion_indices.as_ref();
-        let indices = unpack_indices(packed_deletion_indices);
-
-        let padding_index = 2u32.pow(self.tree_depth as u32);
-
-        Ok(indices
-            .into_iter()
-            .filter(|idx| *idx != padding_index)
-            .map(|x| x as usize)
-            .collect())
-    }
-
    #[instrument(level = "debug", skip_all)]
    pub async fn is_root_mined(&self, root: U256) -> anyhow::Result<bool> {
        let (root_on_mainnet, ..) = self.abi.query_root(root).call().await?;
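The removed `fetch_deletion_indices_from_tx` decoded a `deleteIdentities` call, unpacked its packed indices, and filtered out padding entries equal to 2^tree_depth. A self-contained sketch of that unpacking, under the assumption that `index_packing` stores each index as a 4-byte big-endian `u32` (the actual packing in `crate::utils::index_packing` may differ):

// Assumption: indices packed as consecutive 4-byte big-endian u32 values.
fn unpack_indices(packed: &[u8]) -> Vec<u32> {
    packed
        .chunks_exact(4)
        .map(|chunk| u32::from_be_bytes(chunk.try_into().unwrap()))
        .collect()
}

fn main() {
    let tree_depth = 30u32;
    // Deletion batches are padded with 2^tree_depth; the removed code filtered these out.
    let padding_index = 2u32.pow(tree_depth);
    let packed: Vec<u8> = [5u32, 17, padding_index]
        .iter()
        .flat_map(|idx| idx.to_be_bytes())
        .collect();
    let indices: Vec<u32> = unpack_indices(&packed)
        .into_iter()
        .filter(|idx| *idx != padding_index)
        .collect();
    assert_eq!(indices, vec![5, 17]);
}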