Robert Lopez / CMSIS5


Softmax Functions
[Neural Network Functions]

EXP(2)-based softmax functions. More...

Functions

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)
 Q15 softmax function.
void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)
 Q7 softmax function.

Detailed Description

EXP(2)-based softmax functions.


Function Documentation

void arm_softmax_q15 (const q15_t *vec_in, const uint16_t dim_vec, q15_t *p_out)

Q15 softmax function.

Parameters:
[in]   vec_in    pointer to input vector
[in]   dim_vec   input vector dimension
[out]  p_out     pointer to output vector
Returns:
none.

Here, instead of the typical e-based softmax, we use a 2-based softmax, i.e.:

y_i = 2^(x_i) / sum_j(2^(x_j))

The output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.

Definition at line 63 of file arm_softmax_q15.c.

void arm_softmax_q7 (const q7_t *vec_in, const uint16_t dim_vec, q7_t *p_out)

Q7 softmax function.

Parameters:
[in]   vec_in    pointer to input vector
[in]   dim_vec   input vector dimension
[out]  p_out     pointer to output vector
Returns:
none.

Here, instead of the typical softmax based on the natural exponential e, we use a 2-based softmax, i.e.:

y_i = 2^(x_i) / sum_j(2^(x_j))

The output values differ from those of the e-based softmax, but mathematically the gradient is the same up to a log(2) scaling factor.

Definition at line 63 of file arm_softmax_q7.c.