The code below finds the minimum of the simple function `x^2`. The objective function in this case computes the function value, the gradient, or both, depending on the value of the incoming indicator vector, `ind`. This makes it simple to avoid duplicating calculations that are shared by the objective function and the gradient when computing more complicated functions.

```
//Load the optmt library
library optmt;

//Objective function to be minimized
proc fct(x, ind);

    //Declare 'mm' to be a modelResults
    //struct, local to this procedure
    struct modelResults mm;

    //If the first element of the indicator vector
    //is non-zero, calculate the objective function
    if ind[1];
        //Assign the value of the objective function to the
        //'function' member of the 'modelResults' struct
        mm.function = x.^2;
    endif;

    //If the second element of the indicator vector
    //is non-zero, calculate the gradient
    if ind[2];
        //Assign the value of the gradient to the
        //'gradient' member of the 'modelResults' struct
        mm.gradient = 2 .* x;
    endif;

    //Return the modelResults structure
    retp(mm);
endp;

//Starting parameter value
x0 = 1;

//Declare 'out' to be an optmtResults struct
//to hold the optimization results
struct optmtResults out;

//Minimize the objective function
out = optmt(&fct, x0);

//Print the optimization results
optmtPrt(out);
```
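The `optmtPrt` call at the end prints a formatted report, but the members of the returned `optmtResults` struct can also be read directly. The short sketch below assumes the member names `fct` (function value at the minimum) and `retcode` (return code); consult the `optmt` documentation for the full list of members.

```
//Read individual results from the 'optmtResults' struct;
//'fct' and 'retcode' are assumed member names
print "Function value at minimum: " out.fct;
print "Return code              : " out.retcode;
```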

Running this example prints the simple report below. It shows that `OPTMT` has found the minimum of our function `x^2` at `x` equal to 0. The function value is also 0, as expected, and no parameter bounds were active, because the Lagrangians are an empty matrix.

```
=========================================
           Optmt Version 2.0.1
=========================================

Return code    = 0
Function value = 0.00000
Convergence    : normal convergence

Parameters   Estimates    Gradient
----------------------------------------
x[1,1]           0.0000      0.0000

Number of iterations     2
Minutes to convergence   0.00000

Lagrangians
{}
```
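The benefit of the indicator vector is clearer when the objective function and the gradient share an expensive intermediate computation. As a hypothetical variation on the example above (it is not part of the original), the sketch below defines an objective for `f(x) = exp(x^2)`; the shared term `exp(x^2)` is computed once, before the branches, and reused by both.

```
//Objective with a shared intermediate: f(x) = exp(x^2)
proc fct2(x, ind);
    local e;

    struct modelResults mm;

    //'e' is needed by both the function and the gradient,
    //so it is computed once instead of once per branch
    e = exp(x.^2);

    if ind[1];
        mm.function = e;
    endif;

    if ind[2];
        //The gradient of exp(x^2) is 2*x*exp(x^2)
        mm.gradient = 2 .* x .* e;
    endif;

    retp(mm);
endp;
```

This procedure can be passed to `optmt` in exactly the same way as `fct` above.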