
1336F-BRF75-AE-DE Allen-Bradley (Rockwell) servo drive axis module / variable frequency drive

Model: 1336F-BRF75-AE-DE

Category: Rockwell Allen-Bradley

Contact: Manager He

Mobile: 13313705507

QQ: 2235954483

Email: 2235954483@qq.com

Address: Room 2009, Vanke Chuangxiang Center, No. 1733 Lvling Road, Siming District, Xiamen

Detailed introduction

Data collection

The goal of this phase is to acquire large amounts of information to train the AI model. Raw, unprocessed data alone is not helpful, because the information may contain duplicates, errors, and outliers. Preprocessing the collected data in this initial phase to identify patterns, outliers, and missing information also allows users to correct errors and biases. Depending on the complexity of the data collected, the computing platforms used for data collection are typically based on Arm Cortex or Intel Atom/Core processors. In general, I/O and CPU specifications, rather than the GPU, are more important for performing data collection tasks.
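As a rough illustration of this preprocessing step, the sketch below uses pandas to drop duplicates, fill missing values, and flag outliers; the file name, column names, and thresholds are hypothetical stand-ins for whatever field data is actually collected.

```python
# Minimal preprocessing sketch for collected sensor data (hypothetical CSV and columns).
import pandas as pd

def preprocess(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)                      # raw field data, e.g. timestamp + sensor readings
    df = df.drop_duplicates()                   # remove duplicated records
    df = df.dropna(subset=["timestamp"])        # rows without a timestamp are unusable
    df["temperature"] = df["temperature"].fillna(df["temperature"].median())  # fill gaps
    # flag outliers with a simple z-score rule so they can be reviewed or corrected later
    z = (df["temperature"] - df["temperature"].mean()) / df["temperature"].std()
    df["is_outlier"] = z.abs() > 3
    return df

if __name__ == "__main__":
    clean = preprocess("sensor_log.csv")        # hypothetical file name
    print(clean["is_outlier"].sum(), "outliers flagged")
```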

Training

AI models need to be trained on advanced neural networks and resource-hungry machine learning or deep learning algorithms that demand more powerful processing capabilities, such as powerful GPUs, to support the parallel computing needed to analyze large amounts of collected and preprocessed training data. Training an AI model involves selecting a machine learning model and training it on the collected and preprocessed data. During this process, the parameters also need to be evaluated and tuned to ensure accuracy. Many training models and tools are available to choose from, including off-the-shelf deep learning design frameworks such as PyTorch, TensorFlow, and Caffe. Training is usually performed on designated AI training machines or cloud computing services, such as AWS Deep Learning AMIs, Amazon SageMaker Autopilot, Google Cloud AI, or Azure Machine Learning, rather than in the field.
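A minimal PyTorch sketch of the select-train-evaluate cycle described above; the tiny network and random tensors are placeholders for a real model and the collected, preprocessed dataset.

```python
# Toy PyTorch training loop: synthetic data stands in for the preprocessed dataset.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # small example model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 16)            # placeholder features
y = torch.randint(0, 2, (256,))     # placeholder labels

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)     # forward pass and loss
    loss.backward()                 # backpropagation
    optimizer.step()                # parameter update
    print(f"epoch {epoch}: loss {loss.item():.4f}")  # track loss to evaluate and tune parameters
```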


Inferencing

The final phase involves deploying the trained AI model on the edge computer so that it can make inferences and predictions based on newly collected and preprocessed data quickly and efficiently. Since the inferencing stage generally consumes fewer computing resources than training, a CPU or lightweight accelerator may be sufficient for the AIoT application.

Nonetheless, users will need a conversion tool to convert the trained model to run on specialized edge processors/accelerators, such as Intel OpenVINO or NVIDIA CUDA. Inferencing also includes several different edge computing levels and requirements.
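One common conversion path, assuming a PyTorch model, is to export it to ONNX as an intermediate format that edge toolchains such as OpenVINO can then import; the sketch below shows only the export step, and the model and file names are illustrative.

```python
# Export a trained PyTorch model to ONNX as an intermediate format for edge toolchains.
# (OpenVINO, among others, can import ONNX models; the exact conversion command
#  depends on the toolkit and its version.)
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))  # stand-in for the trained model
model.eval()

dummy_input = torch.randn(1, 16)            # example input that fixes the expected shape
torch.onnx.export(model, dummy_input, "edge_model.onnx",
                  input_names=["features"], output_names=["scores"])
```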

Edge computing levels

Although AI training is still mainly performed in the cloud or on local servers, data collection and inferencing necessarily take place at the edge of the network. Moreover, since inferencing is where the trained AI model does most of the work to accomplish the application objectives (i.e., making decisions or performing actions based on newly collected field data), users need to determine which of the following levels of edge computing is needed in order to choose the appropriate processor.

Low edge computing level

Transferring data between the edge and the cloud is not only expensive, but also time-consuming and results in latency. With low edge computing, applications only send a small amount of useful data to the cloud, which reduces lag time, bandwidth, data transmission fees, power consumption, and hardware costs. An Arm-based platform without accelerators can be used on IIoT devices to collect and analyze data and make quick inferences or decisions.
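A sketch of that "send only useful data" pattern on a low-edge device: readings are screened locally and only anomalous values are forwarded upstream. The publish_to_cloud() helper is a hypothetical stand-in for whatever uplink (MQTT, HTTPS, etc.) the device actually uses.

```python
# Low-edge pattern: filter locally, upload only the few readings worth sending.
from statistics import mean, stdev

def publish_to_cloud(reading: dict) -> None:
    # hypothetical uplink; in practice this could be an MQTT publish or an HTTPS POST
    print("uploading", reading)

def screen(readings: list, threshold: float = 3.0) -> None:
    mu, sigma = mean(readings), stdev(readings)
    for i, value in enumerate(readings):
        if sigma and abs(value - mu) / sigma > threshold:   # simple local anomaly check
            publish_to_cloud({"index": i, "value": value})  # only anomalies leave the device

screen([20.1, 20.3, 19.9, 35.7, 20.2, 20.0])
```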

Medium edge computing level

This level of inferencing can handle various IP camera streams for computer vision or video analytics with sufficient processing frame rates. Medium edge computing covers a wide range of data complexity depending on the AI model and the performance requirements of the use case, such as a facial recognition application for an office entry system versus a large-scale public surveillance network. Most industrial edge computing applications also need to factor in a limited power budget or a fanless design for heat dissipation. A high-performance CPU, an entry-level GPU, or a VPU may be used at this level. For instance, Intel Core i7 series CPUs offer an efficient computer vision solution with the OpenVINO toolkit and software-based AI/ML accelerators that can perform inferencing at the edge.
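The sketch below shows only the plumbing for this level: frames are pulled from an IP camera stream with OpenCV and handed to an inference callable, frame by frame. The RTSP address and run_inference() are placeholders; on Intel hardware that callable could wrap an OpenVINO-compiled model.

```python
# Medium-edge pattern: read an IP camera stream and run per-frame inference.
import cv2  # OpenCV

def run_inference(frame) -> list:
    # placeholder: swap in an OpenVINO-compiled model or any other detector here
    return []

cap = cv2.VideoCapture("rtsp://192.168.1.10/stream1")  # hypothetical camera address
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    detections = run_inference(frame)   # e.g. faces for an office entry system
    # act on detections, draw overlays, or forward events here
cap.release()
```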

High edge computing level

High edge computing involves processing heavier loads of data for AI expert systems that use more complex pattern recognition, such as behavior analysis for automated video surveillance in public security systems to detect security incidents or potentially threatening events. Inferencing at this level generally uses accelerators, including high-end GPUs, VPUs, TPUs, or FPGAs, which consume more power (200 W or more) and generate excess heat.

Since the required power consumption and the heat generated may exceed the limits at the far edge of the network, such as aboard a moving train, high edge computing systems are often deployed at near-edge sites, such as a railway station, to perform their tasks.
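At this level the application code usually targets a discrete accelerator when one is present; the sketch below shows the common PyTorch pattern of selecting CUDA when available and falling back to the CPU otherwise, again with a toy model standing in for the real one.

```python
# Pick a high-end accelerator when present, otherwise fall back to the CPU.
import torch
from torch import nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2)).to(device)  # toy model
model.eval()

with torch.no_grad():
    batch = torch.randn(64, 16, device=device)   # placeholder input batch
    scores = model(batch)                        # inference runs on the selected device
print("running on", device)
```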

Several tools are available for various hardware platforms to help speed up the application development process or improve the overall performance of AI algorithms and machine learning.



Our products are widely used in CNC machinery, metallurgy, oil and gas, petrochemicals, chemicals, papermaking and printing, textile printing and dyeing, machinery, electronics manufacturing, automobile manufacturing, plastics machinery, electric power, water conservancy, water treatment and environmental protection, municipal engineering, boiler heating, energy, and power transmission and distribution.





If you have any questions, please contact us!

