Softmax Functions
[Neural Network Functions]
EXP(2)-based softmax functions.
Functions

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
  Q15 softmax function.
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
  Q7 softmax function.
Detailed Description
EXP(2)-based softmax functions.
Function Documentation
void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
Q15 softmax function.
Parameters:
  [in]  vec_in   pointer to input vector
  [in]  dim_vec  input vector dimension
  [out] p_out    pointer to output vector

Returns:
  none.
Here, instead of the typical e-based softmax, a 2-based softmax is used:

  y_i = 2^(x_i) / sum_j(2^(x_j))

The individual output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.
Definition at line 63 of file arm_softmax_q15.c.
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
Q7 softmax function.
Parameters:
  [in]  vec_in   pointer to input vector
  [in]  dim_vec  input vector dimension
  [out] p_out    pointer to output vector

Returns:
  none.
Here, instead of the typical natural-logarithm (e-based) softmax, a 2-based softmax is used:

  y_i = 2^(x_i) / sum_j(2^(x_j))

The individual output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.
Definition at line 63 of file arm_softmax_q7.c.
Generated on Tue Jul 12 2022 16:47:30 by Doxygen 1.7.2