Added a casadi function interface with tests (example to come) #903
base: master
Conversation
githubactions: conda githubaction: once for all by mamba
@Ipuch
Reviewed 13 of 13 files at r1, all commit messages.
Reviewable status: all files reviewed, 19 unresolved discussions (waiting on @pariterre)
bioptim/interfaces/casadi_function_interface.py
line 9 at r1 (raw file):
class CasadiFunctionInterface(Callback, ABC):
    def __init__(self, name: str, opts={}):
        self.reverse_function = None
Why is this called before the super()? I thought we agreed that good practice is to set attributes after calling super()?
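A minimal sketch of the ordering being suggested (the names come from the quoted code; the body is illustrative only, not the actual implementation):

from abc import ABC
from casadi import Callback

class CasadiFunctionInterface(Callback, ABC):
    def __init__(self, name: str, opts: dict = None):
        super().__init__()  # initialize the Callback machinery first
        self.reverse_function = None  # then set the instance attributes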
bioptim/interfaces/casadi_function_interface.py
line 13 at r1 (raw file):
super(CasadiFunctionInterface, self).__init__()
self.construct(name, opts)  # Defines the self.mx_in()
self._cached_mx_in = super().mx_in()
Do we really need the "_cached" prefix, or would the leading underscore alone be enough?
bioptim/interfaces/casadi_function_interface.py
line 93 at r1 (raw file):
""" pass
No Hessian needed?
bioptim/interfaces/casadi_function_interface.py
line 100 at r1 (raw file):
return self._cached_mx_in

def get_n_in(self):
Isn't this a @property?
bioptim/interfaces/casadi_function_interface.py
line 115 at r1 (raw file):
return [self.function(*args[0])]

def has_reverse(self, nadj):
What does nadj stand for? Can we add a docstring, find a better name, or add a type hint?
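For context: in CasADi's Callback API, nadj is the number of adjoint directions requested for reverse-mode differentiation. A docstring along these lines could answer the question (wording is a suggestion, not from the PR):

def has_reverse(self, nadj: int) -> bool:
    """
    Tell CasADi whether this callback supports reverse-mode (adjoint) differentiation.

    Parameters
    ----------
    nadj: int
        The number of adjoint directions (seeds) CasADi would propagate at once.
    """
    return nadj == 1  # only a single adjoint direction is supported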
bioptim/interfaces/casadi_function_interface.py
line 118 at r1 (raw file):
return nadj == 1

def get_reverse(self, nadj, name, inames, onames, opts):
I need docstrings and type hints for the inputs.
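A possible docstring, based on how CasADi documents these arguments (a suggestion, not the author's text):

def get_reverse(self, nadj: int, name: str, inames: list[str], onames: list[str], opts: dict):
    """
    Build the function that computes the reverse-mode (adjoint) derivatives.

    Parameters
    ----------
    nadj: int
        The number of adjoint directions.
    name: str
        The name CasADi assigns to the derivative function.
    inames: list[str]
        The names of the inputs of the derivative function.
    onames: list[str]
        The names of the outputs of the derivative function.
    opts: dict
        The options to pass to the derivative function.
    """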
bioptim/interfaces/casadi_function_interface.py
line 119 at r1 (raw file):
def get_reverse(self, nadj, name, inames, onames, opts):
    class Reverse(Callback):
- I would write this in another script, or at least move it out of the class.
- What does "Reverse" stand for?
bioptim/interfaces/casadi_function_interface.py
line 143 at r1 (raw file):
def eval(self, arg):
    # Find the index to evaluate from the last parameter which is a DM vector of 0s with one value being 1
    index = arg[-1].toarray()[:, 0].tolist().index(1.0)
This looks like it should be an independent method, since you felt the need for a comment? See the sketch below.
Code quote:
index = arg[-1].toarray()[:, 0].tolist().index(1.0)
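A sketch of the extraction being suggested (the helper name _index_from_seed is hypothetical):

def _index_from_seed(self, seed) -> int:
    """Return the position of the single 1.0 in a one-hot DM vector of 0s."""
    return seed.toarray()[:, 0].tolist().index(1.0)

def eval(self, arg):
    index = self._index_from_seed(arg[-1])
    ...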
bioptim/interfaces/casadi_function_interface.py
line 156 at r1 (raw file):
cx_in = self.mx_in()
nominal_out = self.mx_out()
adj_seed = self.mx_out()
These three lines are used in two places; should they be a method, e.g. input_mx or all_inputs?
Code quote:
cx_in = self.mx_in()
nominal_out = self.mx_out()
adj_seed = self.mx_out()
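A minimal sketch of the helper being proposed (all_inputs is one of the names floated above):

def all_inputs(self):
    """Return the symbolic inputs, the nominal outputs and the adjoint seeds in one call."""
    return self.mx_in(), self.mx_out(), self.mx_out()

so that both call sites become:

cx_in, nominal_out, adj_seed = self.all_inputs()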
bioptim/examples/deep_neural_network/pytorch_ocp.py
line 109 at r1 (raw file):
def save_me(self, path: str) -> None:
    layer_node_count = tuple(
        [model.in_features for model in self._forward_model if isinstance(model, torch.nn.Linear)]
Separate the two lists:

good_name = [model.in_features for model in self._forward_model if isinstance(model, torch.nn.Linear)]
another_good_name = [self._forward_model[-1].out_features]
bioptim/examples/deep_neural_network/pytorch_ocp.py
line 176 at r1 (raw file):
# Populate the return values
all_predictions = torch.cat((all_predictions, output))
all_targets = torch.cat((all_targets, target[None, :]))
This seems like it could be an independent method, e.g. self.update_model_for_each_prediction (see the sketch after the code quote).
Code quote:
# If it is training, we are updating the model with each prediction, we therefore need to do it in a loop
all_predictions = torch.tensor([]).to(self.get_torch_device())
all_targets = torch.tensor([]).to(self.get_torch_device())
for input, target in zip(*targets):
self._optimizer.zero_grad()
# Get the predictions and targets
output = self(input[None, :])
# Do some machine learning shenanigans
current_loss = self._loss_function.forward(output, target[None, :])
current_loss.backward() # Backpropagation
self._optimizer.step() # Updating weights
# Populate the return values
all_predictions = torch.cat((all_predictions, output))
all_targets = torch.cat((all_targets, target[None, :]))
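A sketch of the extraction, using the name suggested above and only the attributes that appear in the quoted code:

def update_model_for_each_prediction(self, targets):
    """Update the model with each prediction and collect the predictions and targets."""
    all_predictions = torch.tensor([]).to(self.get_torch_device())
    all_targets = torch.tensor([]).to(self.get_torch_device())
    for input, target in zip(*targets):
        self._optimizer.zero_grad()
        output = self(input[None, :])  # get the prediction
        current_loss = self._loss_function.forward(output, target[None, :])
        current_loss.backward()  # backpropagation
        self._optimizer.step()  # update the weights
        all_predictions = torch.cat((all_predictions, output))
        all_targets = torch.cat((all_targets, target[None, :]))
    return all_predictions, all_targets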
bioptim/examples/deep_neural_network/pytorch_ocp.py
line 257 at r1 (raw file):
)

return [torch.cat((q, qdot, tau), dim=1), qddot]
- I would move this function into a utils script in the deep_neural_network folder, because it compromises the content of the example, and it is not useful for understanding the example.
- Can you add a one-line docstring for the spirit of the generated dataset? It seems to span the bounds in a specific way. Something like:

"""
Distribution of q, qdot and tau within the bounds of the model, used to generate qddot through forward dynamics.

Returns
-------
Inputs and outputs of an expected neural net.
"""
Code quote:
def generate_dataset(biorbd_model: biorbd.Model, data_point_count: int) -> list[torch.Tensor]:
    q_ranges = np.array(
        [[[q_range.min(), q_range.max()] for q_range in segment.QRanges()] for segment in biorbd_model.segments()]
    ).squeeze()
    qdot_ranges = np.array(
        [
            [[qdot_range.min(), qdot_range.max()] for qdot_range in segment.QdotRanges()]
            for segment in biorbd_model.segments()
        ]
    ).squeeze()
    tau_ranges = np.array([-100, 100] * biorbd_model.nbGeneralizedTorque()).reshape(-1, 2)
    tau_ranges[1, :] = 0

    q = torch.rand(data_point_count, biorbd_model.nbQ()) * (q_ranges[:, 1] - q_ranges[:, 0]) + q_ranges[:, 0]
    qdot = (
        torch.rand(data_point_count, biorbd_model.nbQdot()) * (qdot_ranges[:, 1] - qdot_ranges[:, 0])
        + qdot_ranges[:, 0]
    )
    tau = (
        torch.rand(data_point_count, biorbd_model.nbGeneralizedTorque()) * (tau_ranges[:, 1] - tau_ranges[:, 0])
        + tau_ranges[:, 0]
    )
    q = q.to(torch.float)
    qdot = qdot.to(torch.float)
    tau = tau.to(torch.float)

    qddot = torch.zeros(data_point_count, biorbd_model.nbQddot())
    for i in range(data_point_count):
        qddot[i, :] = torch.tensor(
            biorbd_model.ForwardDynamics(np.array(q[i, :]), np.array(qdot[i, :]), np.array(tau[i, :])).to_array()
        )

    return [torch.cat((q, qdot, tau), dim=1), qddot]
bioptim/examples/deep_neural_network/pytorch_ocp.py
line 267 at r1 (raw file):
validation_data = generate_dataset(biorbd_model, data_point_count=3000)

if force_new_training or not os.path.isfile("models/trained_pendulum_model.pt"):
has_no_trained_model = not os.path.isfile("models/trained_pendulum_model.pt")
Code quote:
not os.path.isfile("models/trained_pendulum_model.pt")
bioptim/examples/deep_neural_network/pytorch_ocp.py
line 270 at r1 (raw file):
model = NeuralNetworkModel(layer_node_count=(6, 8, 2), dropout_probability=0.2)
model.train_me(training_data, validation_data, max_epochs=300)
model.save_me("models/trained_pendulum_model.pt")
Can you move training_data and validation_data under the if case?
Code quote:
model = NeuralNetworkModel(layer_node_count=(6, 8, 2), dropout_probability=0.2)
model.train_me(training_data, validation_data, max_epochs=300)
model.save_me("models/trained_pendulum_model.pt")
bioptim/models/torch/torch_model.py
line 56 at r1 (raw file):
""" def __init__(self, torch_model: torch.nn.Module, device="cuda" if torch.cuda.is_available() else "cpu"):
Can we move this if into the body of the init?
Code quote:
device="cuda" if torch.cuda.is_available() else "cpu"
bioptim/models/torch/torch_model.py
line 61 at r1 (raw file):
)
self._nb_dof = torch_model.size_in // 3
Is this obvious?
bioptim/models/torch/torch_model.py
line 72 at r1 (raw file):
self.parameters = MX.sym("parameters_mx", 0, 1)

@property
Do you know about cached_property? We may want to consider it at some point.
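For reference, a minimal self-contained example of functools.cached_property (a generic illustration, unrelated to the PR's code):

from functools import cached_property

class Model:
    @cached_property
    def parameters(self):
        print("computing...")  # runs only on the first access
        return [1, 2, 3]

m = Model()
m.parameters  # prints "computing..." and stores the result on the instance
m.parameters  # served from the cache, no recomputation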
bioptim/models/torch/torch_model.py
line 104 at r1 (raw file):
"forward_dynamics", [self.q, self.qdot, self.tau, self.external_forces, self.parameters], [self._dynamic_model(vertcat(self.q, self.qdot, self.tau).T).T],
horzcat?
Code quote:
vertcat(self.q, self.qdot, self.tau).T
bioptim/interfaces/interface_utils.py
line 177 at r1 (raw file):
except RuntimeError:
    # This happens mostly when there is a Newton descent in the penalty. Raise warning to notify the user
    warnings.warn("Could not expand during the shake_tree.")
I would state that the problem comes from Bioptim:
"Bioptim could not expand during the shake_tree."