Prototypes help explain the predictions of deep classification models for time series. However, most existing models learn prototypes by randomly initializing an unspecified number of weakly discriminative prototypes, which can lead to unstable models and unreliable results. To address these issues, we propose a new Discriminative Prototype Learning Network (DPL-Net), which learns an appropriate number of class-discriminative prototypes and thereby improves classification performance. Specifically, the proposed Prototype Initialization Mechanism (PIM) introduces a new proximity metric based on the silhouette coefficient and statistical metrics, which automatically determines the class-discriminative prototypes for each class. An encoder layer then encodes both the prototypes derived from PIM and the input series using one-dimensional convolutional neural networks (1D-CNNs). Finally, a prototype classification layer optimizes the prototypes according to regularization terms while classifying the input sequence based on its similarity to the updated prototypes. We compare DPL-Net with 10 baselines on 26 UCR datasets; it achieves the best accuracy on 11 of them. In particular, our method outperforms PIP, CSSL, and LSS by an average of 16.33%, 9.77%, and 5.96% on 22, 14, and 16 datasets, respectively. The interpretability experiments and an application analysis on spectral data indicate that the learned prototypes provide reasonable explanations for the model's classification results.
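The similarity-based prototype classification described above can be illustrated with a minimal sketch: an encoded input series is assigned the class of its nearest learned prototype in the embedding space. All names here are illustrative assumptions, not the paper's implementation; in DPL-Net the embeddings would come from the 1D-CNN encoder, and Euclidean distance stands in for whatever similarity measure the model uses.

```python
import numpy as np

def nearest_prototype_predict(z, prototypes, labels):
    """Classify an encoded series z by its closest prototype.

    z          : (d,) embedding of the input series
    prototypes : (k, d) embeddings of the learned prototypes
    labels     : (k,) class label of each prototype
    Hypothetical helper for illustration; the paper's encoder is a 1D-CNN
    and its similarity measure may differ from plain Euclidean distance.
    """
    dists = np.linalg.norm(prototypes - z, axis=1)  # distance to each prototype
    return int(labels[int(np.argmin(dists))])       # label of the nearest one

# Toy usage: one 2-D prototype per class (classes 0 and 1).
protos = np.array([[0.0, 0.0], [3.0, 3.0]])
labs = np.array([0, 1])
pred = nearest_prototype_predict(np.array([2.5, 2.9]), protos, labs)
print(pred)  # → 1 (closer to the class-1 prototype)
```

Because classification reduces to a distance comparison against a small set of prototypes, each decision can be explained by pointing at the winning prototype, which is the source of the interpretability claimed in the abstract.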