Neural Network Activation Functions
[Neural Network Functions]
Perform activation layers, including ReLU (Rectified Linear Unit), sigmoid and tanh.
Functions

void arm_nn_activations_direct_q15 (q15_t *data, uint16_t size, uint16_t int_width, arm_nn_activation_type type)
    Q15 neural network activation function using direct table look-up.

void arm_nn_activations_direct_q7 (q7_t *data, uint16_t size, uint16_t int_width, arm_nn_activation_type type)
    Q7 neural network activation function using direct table look-up.

void arm_relu_q15 (q15_t *data, uint16_t size)
    Q15 ReLU function.

void arm_relu_q7 (q7_t *data, uint16_t size)
    Q7 ReLU function.
Detailed Description
Perform activation layers, including ReLU (Rectified Linear Unit), sigmoid and tanh.
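The type argument of the direct look-up functions selects which non-linearity is applied. In CMSIS-NN this is expressed through the arm_nn_activation_type enumeration; the sketch below shows the expected shape of that enum, included here only for orientation:

    typedef enum
    {
        ARM_SIGMOID = 0,    /* sigmoid activation           */
        ARM_TANH    = 1     /* hyperbolic tangent activation */
    } arm_nn_activation_type;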
Function Documentation
void arm_nn_activations_direct_q15 (q15_t *data, uint16_t size, uint16_t int_width, arm_nn_activation_type type)
Q15 neural network activation function using direct table look-up.
Parameters:
    [in,out]  data       pointer to input
    [in]      size       number of elements
    [in]      int_width  bit-width of the integer part, assumed to be at most 3
    [in]      type       type of activation function

Returns:
    none.
This is the direct table look-up approach.
The integer part of the fixed-point input is assumed to be at most 3 bits. Allowing more than 3 integer bits adds little: saturating the input first and then applying any of these activation functions gives the same result.
Definition at line 61 of file arm_nn_activations_q15.c.
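A minimal usage sketch, assuming the CMSIS-NN header arm_nnfunctions.h; the buffer name, the element count, and the choice of int_width = 3 are illustrative only, not part of the API:

    #include "arm_nnfunctions.h"

    #define VEC_SIZE 128                 /* illustrative element count  */
    static q15_t act_buf[VEC_SIZE];      /* filled by a preceding layer */

    void run_sigmoid_q15(void)
    {
        /* In-place sigmoid over VEC_SIZE Q15 values, 3 integer bits assumed */
        arm_nn_activations_direct_q15(act_buf, VEC_SIZE, 3, ARM_SIGMOID);
    }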
void arm_nn_activations_direct_q7 (q7_t *data, uint16_t size, uint16_t int_width, arm_nn_activation_type type)
Q7 neural network activation function using direct table look-up.
Parameters:
    [in,out]  data       pointer to input
    [in]      size       number of elements
    [in]      int_width  bit-width of the integer part, assumed to be at most 3
    [in]      type       type of activation function

Returns:
    none.
This is the direct table look-up approach.
The integer part of the fixed-point input is assumed to be at most 3 bits. Allowing more than 3 integer bits adds little: saturating the input first and then applying any of these activation functions gives the same result.
Definition at line 61 of file arm_nn_activations_q7.c.
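A minimal usage sketch along the same lines, assuming arm_nnfunctions.h; the buffer name, element count, and int_width value are illustrative only:

    #include "arm_nnfunctions.h"

    #define VEC_SIZE 64                  /* illustrative element count  */
    static q7_t act_buf7[VEC_SIZE];      /* filled by a preceding layer */

    void run_tanh_q7(void)
    {
        /* In-place tanh over VEC_SIZE Q7 values, 3 integer bits assumed */
        arm_nn_activations_direct_q7(act_buf7, VEC_SIZE, 3, ARM_TANH);
    }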
void arm_relu_q15 (q15_t *data, uint16_t size)
Q15 ReLU function.
Parameters:
    [in,out]  data  pointer to input
    [in]      size  number of elements

Returns:
    none.
Optimized ReLU implementation using QSUB instructions.
Definition at line 55 of file arm_relu_q15.c.
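A minimal usage sketch, assuming arm_nnfunctions.h; the buffer name and element count are illustrative only:

    #include "arm_nnfunctions.h"

    #define FC_OUT_SIZE 32                /* illustrative element count  */
    static q15_t fc_out[FC_OUT_SIZE];     /* output of a preceding layer */

    void run_relu_q15(void)
    {
        /* In-place ReLU: negative values are clamped to zero */
        arm_relu_q15(fc_out, FC_OUT_SIZE);
    }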
void arm_relu_q7 (q7_t *data, uint16_t size)
Q7 ReLU function.
Parameters:
    [in,out]  data  pointer to input
    [in]      size  number of elements

Returns:
    none.
Optimized ReLU implementation using QSUB instructions.
Definition at line 55 of file arm_relu_q7.c.
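A minimal usage sketch, assuming arm_nnfunctions.h; the buffer name and element count are illustrative only:

    #include "arm_nnfunctions.h"

    #define CONV_OUT_SIZE 256                 /* illustrative element count  */
    static q7_t conv_out[CONV_OUT_SIZE];      /* output of a preceding layer */

    void run_relu_q7(void)
    {
        /* In-place ReLU: negative values are clamped to zero */
        arm_relu_q7(conv_out, CONV_OUT_SIZE);
    }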