Activation functions significantly influence classification accuracy in both training and testing. The most prevalent choice is ReLU. However, ReLU produces only non-negative outputs and has a zero gradient for negative inputs, which can leave units inactive because their weights receive no updates. The resulting constant zero gradient also contributes to the vanishing gradient problem and slows convergence. In response to these challenges, we present a pioneering solution: the Balanced Learnable Activation Function (BLAF), which features two learnable hyperparameters. In contrast to ReLU, BLAF admits negative values, shifting mean activations toward zero. Mean activations closer to zero facilitate faster learning by bringing the gradient closer to the natural gradient, thereby improving convergence and accuracy. To preserve input magnitudes during training and to accelerate convergence in the modified deep neural networks, we propose appropriate initial values for both hyperparameters. These learnable hyperparameters control the saturation point for negative inputs; once BLAF reaches this point, its small derivative reduces the variation and the amount of information propagated to the next layer, producing simpler, faster representations.
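This section does not give BLAF's closed form, but the behaviour described above can be sketched with an assumed ELU-like parameterisation in which a learnable alpha sets the negative saturation level and a learnable beta controls how quickly negative inputs approach it. The parameter names and the functional form are illustrative assumptions, not the authors' exact definition:

```python
import torch
import torch.nn as nn

class LearnableSaturatingActivation(nn.Module):
    """Sketch of an activation with two learnable hyperparameters.

    Assumed ELU-like form: alpha sets the negative saturation level and
    beta controls how fast negative inputs approach that saturation.
    This is an illustration, not the exact BLAF definition.
    """

    def __init__(self, alpha_init: float = 1.0, beta_init: float = 1.0):
        super().__init__()
        # Learnable hyperparameters, updated by the optimiser alongside the weights.
        self.alpha = nn.Parameter(torch.tensor(float(alpha_init)))
        self.beta = nn.Parameter(torch.tensor(float(beta_init)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Positive inputs pass through unchanged; negative inputs saturate
        # smoothly toward -alpha, which pulls mean activations toward zero.
        z = torch.clamp(x, max=0.0) / self.beta.clamp(min=1e-6)
        negative_branch = self.alpha * (torch.exp(z) - 1.0)
        return torch.where(x > 0, x, negative_branch)
```

In such a sketch, gradients flow into alpha and beta during backpropagation just as they do into the layer weights, which is what allows the negative saturation behaviour to adapt to the data.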
To avoid the repeated trial-and-error tuning required by activation functions with fixed hyperparameters, BLAF adjusts both hyperparameters automatically during training. This adaptation enhances BLAF's adaptability in classification, allowing it to capture complex data patterns efficiently and to improve learning and generalization across diverse datasets. Unlike conventional approaches that rely on pre-designed architectures with default activation functions, we reconstruct the well-known InceptionV3 and ResNet50 networks from scratch. We train these networks with BLAF and with other activation functions to ensure an impartial evaluation.
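As a rough illustration of that evaluation setup, and assuming a PyTorch/torchvision environment (which this section does not specify), one way to train a randomly initialised ResNet50 with a chosen activation is to build the architecture without pretrained weights and swap every ReLU for the activation under test, reusing the class sketched above:

```python
import torch.nn as nn
from torchvision.models import resnet50

def swap_relu(module: nn.Module, make_activation) -> nn.Module:
    """Recursively replace every nn.ReLU in `module` with a new activation instance."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            # Note: torchvision's ResNet reuses one ReLU module per block,
            # so the replacement activation is shared within that block.
            setattr(module, name, make_activation())
        else:
            swap_relu(child, make_activation)
    return module

# weights=None gives a randomly initialised network, i.e. trained from scratch.
# LearnableSaturatingActivation is the illustrative class from the earlier sketch;
# passing nn.ReLU or another activation factory instead gives comparison baselines.
model = swap_relu(resnet50(weights=None), LearnableSaturatingActivation)
```

An analogous swap, or building the architecture layer by layer, would apply to InceptionV3; the point of such a setup is that every activation function is evaluated in the same from-scratch network under the same training schedule.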