In rotating machinery fault diagnosis, extracting sensitive features from vibration signals corrupted by strong noise remains a core challenge. Classical deep learning models such as convolutional neural networks (CNNs) perform well on clean laboratory datasets, but on complex industrial measurements, redundant noise features can degrade their accuracy. To address this, the paper "Deep Residual Shrinkage Networks for Fault Diagnosis" proposed a novel architecture: the deep residual shrinkage network (DRSN).
The core idea of DRSN is to integrate soft thresholding, a classical signal-processing denoising technique, into a residual network. An attention mechanism lets the model adaptively learn a shrinkage threshold for each group of feature maps. As features propagate, near-zero features regarded as noise are automatically set to zero, while strong features are preserved. This design not only improves robustness (i.e., resistance to interference) under heavy noise, but also achieves end-to-end adaptive feature extraction without relying on complex expert prior knowledge.
1. From Residual Blocks to Adaptive Shrinkage
The core component of DRSN is the Residual Shrinkage Building Unit with Channel-wise thresholds (RSBU-CW). On top of conventional residual learning, this module adds a parallel sub-network that computes the shrinkage thresholds.
In an RSBU-CW module, the input features pass through two convolution and batch normalization (BN) stages and then enter an attention branch. First, taking absolute values and applying global average pooling (GAP) collapses the spatial dimension, yielding the mean absolute value of each channel. Two fully connected layers followed by a Sigmoid activation then learn a scaling factor α in the range (0, 1). The shrinkage threshold is computed as τ = α * average(abs(x)).
Given the threshold, the model applies the soft-thresholding operator to the feature map: y = sign(x) * max(abs(x) - τ, 0). This design lets the model set an independent threshold for every feature channel.
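The two formulas above can be illustrated with a minimal NumPy sketch. Note that the fixed α used here is only a stand-in for the value the attention branch would learn; the shapes follow the 1-D case (batch, steps, channels) used later in the code.

```python
import numpy as np

def soft_threshold(x, tau):
    """Soft-thresholding operator: y = sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def channel_thresholds(features, alpha):
    """Channel-wise thresholds tau = alpha * mean(|x|) over the time axis.

    features: array of shape (batch, steps, channels)
    alpha:    scaling factors in (0, 1), shape (batch, channels)
    """
    abs_mean = np.mean(np.abs(features), axis=1)   # (batch, channels)
    tau = alpha * abs_mean                         # (batch, channels)
    return tau[:, None, :]                         # broadcastable over steps

# A toy feature map: one sample, four time steps, one channel.
x = np.array([[[0.1], [-0.05], [2.0], [-1.5]]])
alpha = np.array([[0.5]])          # stand-in for the attention branch's output
tau = channel_thresholds(x, alpha) # tau = 0.5 * mean(|x|) = 0.45625
y = soft_threshold(x, tau)
```

The two small entries (0.1 and -0.05) fall below the threshold and are zeroed, while the two strong entries survive, shrunk toward zero by τ.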
Figure 1. Deep residual shrinkage network
2. Experimental Setup
To evaluate DRSN-CW, the standard benchmark in bearing diagnosis, the Case Western Reserve University (CWRU) bearing dataset, was used. The experiments cover 10 classes: the normal condition as well as inner-race, outer-race, and ball faults. Each sample is cut from the raw signal with a sliding window of 1024 points.
Figure 2. Class partitioning
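The non-overlapping sliding-window segmentation can be sketched in a few lines of NumPy; the 120,000-point recording below is a hypothetical stand-in for one CWRU drive-end signal.

```python
import numpy as np

def segment_signal(series, window=1024):
    """Cut a long 1-D vibration signal into non-overlapping windows of fixed length."""
    n_windows = len(series) // window
    # Drop the incomplete tail, then reshape into (n_windows, window).
    return series[:n_windows * window].reshape(n_windows, window)

# A hypothetical recording of 120,000 points (about 10 s at the CWRU 12 kHz rate).
raw = np.random.default_rng(0).standard_normal(120_000)
samples = segment_signal(raw, window=1024)
print(samples.shape)   # (117, 1024)
```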
In the data-engineering module, besides standard normalization, an online real-time augmentation pipeline was designed to simulate extreme industrial conditions. It includes:
(1) Circular shift (rolling): simulates uncertainty in the sampling start time.
(2) Transient impulse injection: simulates occasional mechanical knocks.
(3) Additive white Gaussian noise (AWGN): during training, noise at varying signal-to-noise ratios (SNR) is mixed in dynamically so the model keeps its features consistent across environments. In particular, a test environment at -8 dB SNR was constructed, which represents rather strong background noise in industrial diagnosis.
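The AWGN injection follows the relation P_noise = P_signal / 10^(SNR/10). A minimal sketch (the 50 Hz test tone is an arbitrary stand-in signal) shows that the achieved SNR matches the target:

```python
import numpy as np

def awgn(signal, snr_db, rng=None):
    """Add white Gaussian noise at a target SNR: P_noise = P_signal / 10^(SNR/10)."""
    rng = rng or np.random.default_rng()
    p_signal = np.mean(signal ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10.0))
    return signal + rng.normal(0.0, np.sqrt(p_noise), signal.shape)

rng = np.random.default_rng(42)
clean = np.sin(2 * np.pi * 50 * np.linspace(0, 1, 12_000))  # hypothetical 50 Hz tone
noisy = awgn(clean, snr_db=-8, rng=rng)

# Measure the achieved SNR; it should sit close to the -8 dB target.
measured = 10 * np.log10(np.mean(clean ** 2) / np.mean((noisy - clean) ** 2))
print(round(measured, 1))
```

At -8 dB the noise power is more than six times the signal power, which is why this setting counts as an extreme test condition.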
The complete TensorFlow code is as follows:
"""
Project: Deep Residual Shrinkage Network (DRSN-CW) - rotating machinery fault diagnosis reproduction
Reference: Zhao, M., et al. "Deep Residual Shrinkage Networks for Fault Diagnosis," IEEE TII, 2020.
Core ideas:
1. Soft thresholding: a nonlinear mapping that zeroes near-zero noise features while keeping strong features.
2. Attention: a small sub-network learns a shrinkage threshold per channel, enabling adaptive denoising.
3. Residual learning: mitigates vanishing gradients in deep networks and stabilizes feature propagation.
"""
import os
import sys
import logging
import numpy as np
import scipy.io as sio
import tensorflow as tf
from tensorflow.keras import layers, Model, regularizers
from sklearn.model_selection import train_test_split as split_data

# =============================================================================
# 1. Environment and resource configuration
# =============================================================================
logging.basicConfig(level=logging.INFO, format='[%(asctime)s] %(levelname)s: %(message)s')

class GPUConfig:
    """Compute-resource manager: initializes the TensorFlow runtime and hardware acceleration."""

    @staticmethod
    def init_tf():
        """
        Configure the compute backend:
        - Suppress verbose logs: hide non-critical system warnings.
        - On-demand GPU memory: stop TensorFlow from grabbing all GPU memory at
          startup, so the GPU can be shared with other processes.
        """
        os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'
        physical_gpu_list = tf.config.list_physical_devices('GPU')
        if physical_gpu_list:
            try:
                for gpu_device in physical_gpu_list:
                    # Enable dynamic memory growth.
                    tf.config.experimental.set_memory_growth(gpu_device, True)
                logging.info("GPU acceleration ready: %d device(s) detected, dynamic memory growth enabled.",
                             len(physical_gpu_list))
            except RuntimeError as hardware_error:
                logging.warning("GPU backend configuration failed (device may be in use): %s", hardware_error)
        else:
            logging.info("No GPU detected; falling back to CPU (training may be slow).")

# Run the global initialization.
GPUConfig.init_tf()

# =============================================================================
# 2. Data engineering (ETL - Extract, Transform, Load)
# =============================================================================
class CWRULoader:
    """CWRU dataset parser: reads the raw .mat vibration signals, segments them, and rebuilds features."""

    def __init__(self, dataset_root, window_size=1024):
        """
        :param dataset_root: root directory of the dataset
        :param window_size: sample length (also used as the window stride; typically 1024 or 2048)
        """
        self.base_directory = os.path.abspath(dataset_root)
        self.sample_length = window_size
        self.sampling_interval = window_size

    def _parse_mat_content(self, target_file):
        """Extract the drive-end (DE) time series from the MATLAB container."""
        try:
            storage = sio.loadmat(target_file)
            for identifier in storage.keys():
                # Match the drive-end accelerometer signal key.
                if 'DE_time' in identifier:
                    return storage[identifier].flatten()
        except Exception as parse_error:
            logging.debug("Failed to read %s: %s", target_file, parse_error)
            return None
        return None

    def load_data(self, category_dictionary):
        """
        Build the dataset.
        :param category_dictionary: mapping from label to the list of file names.
        :return: (X_data, y_label) NumPy arrays.
        """
        feature_collection, label_collection = [], []
        is_data_found = False
        for class_idx, name_list in category_dictionary.items():
            for filename in name_list:
                full_path = os.path.join(self.base_directory, "{0}.mat".format(filename))
                if not os.path.exists(full_path):
                    continue
                vibration_series = self._parse_mat_content(full_path)
                if vibration_series is None:
                    continue
                is_data_found = True
                # Non-overlapping sliding window: cut the long series into fixed-length samples.
                for pointer in range(0, len(vibration_series) - self.sample_length + 1, self.sampling_interval):
                    sub_sequence = vibration_series[pointer : pointer + self.sample_length]
                    feature_collection.append(sub_sequence)
                    label_collection.append(class_idx)
        if not is_data_found:
            raise FileNotFoundError("No CWRU .mat files found under the given path; please check it.")
        return np.array(feature_collection, dtype='float32'), np.array(label_collection, dtype='int32')


def add_awgn(signal_input, snr_value):
    """
    Additive white Gaussian noise (AWGN) injection.
    Simulates real industrial background noise to test model robustness.
    Formula: P_noise = P_signal / 10^(SNR/10)
    """
    signal_input = np.array(signal_input)
    random_engine = np.random.default_rng()
    # Support either a fixed SNR or random sampling from an SNR range.
    target_snr = snr_value if not isinstance(snr_value, (list, tuple)) \
        else random_engine.uniform(snr_value[0], snr_value[1])
    # Compute the signal power and derive the noise standard deviation.
    signal_power = np.mean(np.square(signal_input), axis=1, keepdims=True)
    noise_variance = signal_power / (10 ** (target_snr / 10.0))
    noise_component = random_engine.normal(0, np.sqrt(noise_variance), signal_input.shape)
    return (signal_input + noise_component).astype('float32')

# =============================================================================
# 3. Network components (DRSN core)
# =============================================================================
class SoftThresholdOperator(layers.Layer):
    """
    Soft-thresholding operator (custom layer):
    the nonlinear core of DRSN; shrinks feature maps with the threshold tau.
    Formula: y = sign(x) * max(|x| - tau, 0)
    """
    def __init__(self, **kwargs):
        super(SoftThresholdOperator, self).__init__(**kwargs)

    def call(self, inputs):
        """
        x_conv: input feature map (Batch, Steps, Channels)
        tau: learned thresholds (Batch, Channels)
        """
        x_conv, tau = inputs
        # Broadcast the thresholds over the spatial dimension of the feature map.
        expanded_tau = tf.expand_dims(tau, axis=1)
        return tf.sign(x_conv) * tf.maximum(tf.abs(x_conv) - expanded_tau, 0.0)


class RSBU_CW(layers.Layer):
    """
    Residual Shrinkage Building Unit with Channel-wise thresholds:
    a residual block with a channel attention sub-network that produces an
    independent threshold for every channel.
    """
    def __init__(self, filters, kernel_size, strides=1, **kwargs):
        super(RSBU_CW, self).__init__(**kwargs)
        self.num_kernels = filters
        self.step_size = strides
        self.width = kernel_size
        self.weight_decay = regularizers.l2(1e-4)
        # Identity mapping path (residual shortcut).
        self.shortcut = None
        # Main branch: classic BN-ReLU-Conv pre-activation structure.
        self.bn_alpha = layers.BatchNormalization()
        self.relu_alpha = layers.Activation('relu')
        self.conv_alpha = layers.Conv1D(filters, kernel_size, strides=strides, padding='same',
                                        kernel_initializer='he_normal',
                                        kernel_regularizer=self.weight_decay)
        self.bn_beta = layers.BatchNormalization()
        self.relu_beta = layers.Activation('relu')
        self.conv_beta = layers.Conv1D(filters, kernel_size, strides=1, padding='same',
                                       kernel_initializer='he_normal',
                                       kernel_regularizer=self.weight_decay)
        # Attention sub-network: computes the channel-wise shrinkage thresholds.
        self.gap = layers.GlobalAveragePooling1D()
        self.fc1 = layers.Dense(filters, kernel_initializer='he_normal')
        self.bn_gamma = layers.BatchNormalization()
        self.relu_gamma = layers.Activation('relu')
        self.fc2 = layers.Dense(filters, activation='sigmoid')  # normalized scaling factor in (0, 1)
        self.threshold_op = SoftThresholdOperator()
        self.residual_add = layers.Add()

    def build(self, input_dim):
        """
        Adjust the shortcut dynamically: when the stride is not 1 or the channel
        count changes, align the residual with a 1x1 convolution.
        """
        if self.step_size != 1 or input_dim[-1] != self.num_kernels:
            self.shortcut = tf.keras.Sequential([
                layers.Conv1D(self.num_kernels, 1, strides=self.step_size, padding='same', use_bias=False),
                layers.BatchNormalization()
            ])
        super(RSBU_CW, self).build(input_dim)

    def call(self, layer_inputs):
        """
        Flow: feature extraction -> global channel statistics -> dynamic threshold
        computation -> soft-threshold denoising -> residual addition.
        """
        identity = layer_inputs
        if self.shortcut is not None:
            identity = self.shortcut(layer_inputs)
        # Two convolution stages produce the intermediate feature map x_conv.
        x_conv = self.bn_alpha(layer_inputs)
        x_conv = self.relu_alpha(x_conv)
        x_conv = self.conv_alpha(x_conv)
        x_conv = self.bn_beta(x_conv)
        x_conv = self.relu_beta(x_conv)
        x_conv = self.conv_beta(x_conv)
        # Per-channel mean of absolute values as the global statistic.
        x_abs = tf.abs(x_conv)
        abs_mean = self.gap(x_abs)
        # The sub-network outputs alpha in (0, 1); threshold tau = alpha * abs_mean.
        z = self.fc1(abs_mean)
        z = self.bn_gamma(z)
        z = self.relu_gamma(z)
        alpha = self.fc2(z)
        tau = tf.multiply(alpha, abs_mean)
        # Apply soft-threshold shrinkage and fuse with the residual.
        denoised_output = self.threshold_op([x_conv, tau])
        return self.residual_add([denoised_output, identity])


class DRSN_CW(Model):
    """
    Full DRSN-CW architecture:
    stacks several RSBU modules in sequence, then classifies faults with a dense head.
    """
    def __init__(self, num_classes):
        super(DRSN_CW, self).__init__(name="Bearing_Fault_DRSN")
        self.weight_decay = regularizers.l2(1e-4)
        # Stem: initial perception of the 1-D time series.
        self.conv1 = layers.Conv1D(32, 15, strides=2, padding='same',
                                   kernel_initializer='he_normal',
                                   kernel_regularizer=self.weight_decay)
        self.bn1 = layers.BatchNormalization()
        self.relu1 = layers.Activation('relu')
        # Stack of shrinkage residual blocks (channels grow from 32 to 128).
        self.rsbu_blocks = [
            RSBU_CW(32, 5, strides=2), RSBU_CW(32, 5, strides=1),
            RSBU_CW(64, 5, strides=2), RSBU_CW(64, 5, strides=1),
            RSBU_CW(128, 5, strides=2), RSBU_CW(128, 5, strides=1)
        ]
        # Head: pool down, then map to the class space.
        self.post_norm = layers.BatchNormalization()
        self.post_relu = layers.Activation('relu')
        self.gap_layer = layers.GlobalAveragePooling1D()
        self.classifier = layers.Dense(num_classes, activation='softmax',
                                       kernel_regularizer=self.weight_decay)

    def call(self, network_input):
        """End-to-end forward pass."""
        x = self.conv1(network_input)
        x = self.bn1(x)
        x = self.relu1(x)
        for block in self.rsbu_blocks:
            x = block(x)
        x = self.post_norm(x)
        x = self.post_relu(x)
        x = self.gap_layer(x)
        return self.classifier(x)

# =============================================================================
# 4. Training, augmentation, and evaluation pipeline
# =============================================================================
def train_and_test(dataset_path, seq_len=1024):
    """
    End-to-end controller: preprocessing, online augmentation, model training,
    and extreme-environment (-8 dB) evaluation.
    """
    # Fault classes (based on the CWRU file naming convention).
    label_map = {
        0: ['Normal_0', 'Normal_1', 'Normal_2', 'Normal_3'],
        1: ['IR007_0', 'IR007_1', 'IR007_2', 'IR007_3'],
        2: ['IR014_0', 'IR014_1', 'IR014_2', 'IR014_3'],
        3: ['IR021_0', 'IR021_1', 'IR021_2', 'IR021_3'],
        4: ['B007_0', 'B007_1', 'B007_2', 'B007_3'],
        5: ['B014_0', 'B014_1', 'B014_2', 'B014_3'],
        6: ['B021_0', 'B021_1', 'B021_2', 'B021_3'],
        7: ['OR007@6_0', 'OR007@6_1', 'OR007@6_2', 'OR007@6_3'],
        8: ['OR014@6_0', 'OR014@6_1', 'OR014@6_2', 'OR014@6_3'],
        9: ['OR021@6_0', 'OR021@6_1', 'OR021@6_2', 'OR021@6_3']
    }
    data_engine = CWRULoader(dataset_root=dataset_path, window_size=seq_len)
    try:
        signals, labels = data_engine.load_data(label_map)
    except Exception as data_err:
        logging.error("Data loading failed: %s", data_err)
        return

    # Random split: 70% train, 15% validation, 15% test.
    train_x_pre, temp_x, train_y_pre, temp_y = split_data(
        signals, labels, test_size=0.3, random_state=42
    )
    val_x_pre, test_x_pre, val_y_pre, test_y_pre = split_data(
        temp_x, temp_y, test_size=0.5, random_state=42
    )

    # Standardize with training-set statistics only, to avoid test-set leakage.
    mu, sigma = np.mean(train_x_pre), np.std(train_x_pre)
    def normalize(obs):
        return ((obs - mu) / sigma).reshape(-1, seq_len, 1)
    train_set_x = normalize(train_x_pre)
    val_set_x = normalize(val_x_pre)
    test_set_x = normalize(test_x_pre)

    # One-hot encode the labels.
    num_classes = len(label_map)
    train_set_y = tf.keras.utils.to_categorical(train_y_pre, num_classes).astype('float32')
    val_set_y = tf.keras.utils.to_categorical(val_y_pre, num_classes).astype('float32')
    test_set_y = tf.keras.utils.to_categorical(test_y_pre, num_classes).astype('float32')

    # Evaluation environment: inject very strong noise (-8 dB) to test the model
    # under extreme industrial conditions.
    val_x_awgn = add_awgn(val_set_x, snr_value=-8)
    test_x_awgn = add_awgn(test_set_x, snr_value=-8)

    def augment_batch(feat_batch, label_batch):
        """
        Online data augmentation:
        1. Circular shift: simulates sampling-start uncertainty.
        2. Transient impulses: simulates occasional mechanical knocks.
        3. Mixed noise: raises the model's noise tolerance.
        """
        rand_gen = np.random.default_rng()
        augmented_x = feat_batch.copy()
        batch_n, steps_n, _ = augmented_x.shape
        # Random phase shift.
        for sample_idx in range(batch_n):
            offset = rand_gen.integers(0, steps_n)
            augmented_x[sample_idx, :, 0] = np.roll(augmented_x[sample_idx, :, 0], offset)
        # Impulse noise injection (10% probability per batch).
        if rand_gen.random() > 0.9:
            for sample_idx in range(batch_n):
                if rand_gen.random() > 0.5:
                    num_spikes = rand_gen.integers(1, 3)
                    positions = rand_gen.integers(0, steps_n, num_spikes)
                    spike_mag = np.std(augmented_x[sample_idx]) * rand_gen.uniform(1.5, 2.5)
                    augmented_x[sample_idx, positions, 0] += spike_mag * rand_gen.choice([-1, 1], size=num_spikes)
        # Dynamic SNR mixing (50% probability).
        if rand_gen.random() > 0.5:
            augmented_x = add_awgn(augmented_x, snr_value=(-8, 8))
        return augmented_x.astype(np.float32), label_batch.astype(np.float32)

    def _tensor_spec_binding(f_tensor, l_tensor):
        """Explicitly bind shape information for tf.data."""
        f_tensor.set_shape([None, seq_len, 1])
        l_tensor.set_shape([None, num_classes])
        return f_tensor, l_tensor

    # Build a high-throughput input pipeline with tf.data.
    training_pipeline = tf.data.Dataset.from_tensor_slices((train_set_x.astype('float32'), train_set_y))
    training_pipeline = training_pipeline.shuffle(len(train_set_x)).batch(64)
    training_pipeline = training_pipeline.map(
        lambda x, y: tf.numpy_function(augment_batch, [x, y], [tf.float32, tf.float32]),
        num_parallel_calls=tf.data.AUTOTUNE
    ).map(_tensor_spec_binding).prefetch(tf.data.AUTOTUNE)

    # Model instantiation and compilation.
    model_instance = DRSN_CW(num_classes=num_classes)
    model_instance.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
        loss=tf.keras.losses.CategoricalCrossentropy(label_smoothing=0.0),  # cross-entropy loss
        metrics=['accuracy']
    )
    logging.info("Diagnosis system started: classes=%d, sequence length=%d", num_classes, seq_len)

    # Learning-rate scheduling and early stopping.
    optimization_callbacks = [
        tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', factor=0.5, patience=7,
                                             min_lr=1e-6, verbose=1),
        tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=20, restore_best_weights=True)
    ]

    # Fit the model.
    model_instance.fit(
        training_pipeline,
        epochs=100,
        validation_data=(val_x_awgn, val_set_y),
        callbacks=optimization_callbacks,
        verbose=2
    )

    # Final evaluation under a very low SNR.
    final_loss, final_acc = model_instance.evaluate(test_x_awgn, test_set_y, verbose=0)
    print("\n" + "=" * 50)
    print("Evaluation report (DRSN-CW)")
    print("Test condition: -8 dB SNR (strong noise)")
    print("Final accuracy: {0:.2f}%".format(final_acc * 100))
    print("=" * 50)

# =============================================================================
# Entry point
# =============================================================================
if __name__ == "__main__":
    # Default data search directory.
    DATA_PATH = os.path.join(os.getcwd(), 'data_path')
    if not os.path.exists(DATA_PATH):
        logging.warning("Default data directory not found: %s", DATA_PATH)
        user_input_path = input("Enter the full path to the raw CWRU .mat files: ").strip()
        if user_input_path:
            DATA_PATH = user_input_path
        else:
            logging.critical("No valid path provided; exiting.")
            sys.exit(1)
    # Launch training and evaluation.
    train_and_test(DATA_PATH, seq_len=1024)
3. Diagnostic Performance under Strong Noise and Reproduction Summary
In the reproduction experiments, the model was trained with the Adam optimizer combined with dynamic learning-rate scheduling (ReduceLROnPlateau). Even with -8 dB Gaussian noise injected, DRSN-CW maintained a test accuracy above 90%.
Figure 3. Experimental results
Original paper:
Title: Deep residual shrinkage networks for fault diagnosis
Journal: IEEE Transactions on Industrial Informatics, 2020, 16(7): 4681-4690.
DOI: 10.1109/TII.2019.2943898
https://ieeexplore.ieee.org/document/8850096
Reviewing editor: Huang Yu