In the scenario of a limited labeled dataset, this paper introduces a deep learning-based approach that improves Diabetic Retinopathy (DR) severity recognition using fundus images combined with wide-field swept-source optical coherence tomography angiography (SS-OCTA). The proposed architecture, TFA-Net, comprises a backbone convolutional network coupled with a Twofold Feature Augmentation mechanism. The former includes multiple convolution blocks that extract representational features at various scales. The latter is constructed in two stages: the use of weight-sharing convolution kernels and the deployment of a Reverse Cross-Attention (RCA) stream. The proposed model achieves a Quadratic Weighted Kappa of 90.2% on the small internal KHUMC dataset. The robustness of the RCA stream is also evaluated on the single-modal Messidor dataset, where the obtained mean Accuracy (94.8%) and Area Under the Receiver Operating Characteristic curve (99.4%) significantly outperform the state of the art. Using a network strongly regularized in feature space to learn the amalgamation of different modalities proves effective. Given the widespread availability of multi-modal retinal imaging for diabetes patients nowadays, such an approach can reduce the heavy reliance on large quantities of labeled visual data. TFA-Net coordinates hybrid information from fundus photos and wide-field SS-OCTA to exhaustively exploit DR-oriented biomarkers, and the embedded feature-wise augmentation scheme efficiently enriches generalization ability despite learning from small-scale labeled data.
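As a concrete reference for the headline metric, the Quadratic Weighted Kappa used for ordinal DR-severity grading can be computed from a confusion matrix. A minimal pure-Python sketch (the function name and toy labels are illustrative, not from the paper):

```python
def quadratic_weighted_kappa(y_true, y_pred, n_classes):
    """Cohen's kappa with quadratic penalty weights, suited to
    ordinal grading: far-off predictions are penalized more."""
    # Observed confusion matrix O and its marginal histograms.
    O = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        O[t][p] += 1
    n = len(y_true)
    hist_t = [sum(row) for row in O]
    hist_p = [sum(O[i][j] for i in range(n_classes)) for j in range(n_classes)]
    num = den = 0.0
    for i in range(n_classes):
        for j in range(n_classes):
            w = ((i - j) ** 2) / ((n_classes - 1) ** 2)  # quadratic weight
            E = hist_t[i] * hist_p[j] / n  # expected count under independence
            num += w * O[i][j]
            den += w * E
    return 1.0 - num / den

# Perfect agreement gives kappa = 1; any disagreement lowers it.
print(quadratic_weighted_kappa([0, 1, 2, 3], [0, 1, 2, 3], 4))  # 1.0
```

In practice the same quantity is available as scikit-learn's `cohen_kappa_score` with `weights="quadratic"`.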

Shoulder exoskeletons can potentially reduce overuse injuries in industrial settings involving overhead work or lifting tasks. Previous studies evaluated these devices primarily in laboratory settings, and evidence of their effectiveness outside the lab is lacking. The present study aimed to evaluate the effectiveness of two passive shoulder exoskeletons and explore how laboratory-based results transfer to the field. Four industrial workers performed controlled and in-field evaluations without and with two exoskeletons, ShoulderX and Skelex, in randomized order. The exoskeletons decreased upper trapezius activity (by up to 46%) and heart rate in isolated tasks. In the field, the effects of both exoskeletons were less prominent (up to 26% reduction in upper trapezius activity) while lifting windscreens weighing 13.1 and 17.0 kg. ShoulderX received high discomfort scores in the shoulder region, and the usability of both exoskeletons was moderate. Overall, both exoskeletons positively affected the isolated tasks, but in the field their support was limited. Skelex, which performed worse than ShoulderX in the isolated tasks, seemed to provide the most support in the in-field situations. Exoskeleton interface improvements are required to improve comfort and usability. Laboratory-based evaluations of exoskeletons should be interpreted with caution, since the effect of an exoskeleton is task specific and not all in-field situations with high-level lifting will benefit equally from an exoskeleton. Before considering passive exoskeleton implementation, we recommend analyzing joint angles in the field, because the support is inherently dependent on these angles, and performing in-field pilot tests.
This paper is the first thorough evaluation of two shoulder exoskeletons in both controlled and in-field situations.

We propose a novel asymmetric image compression system of light l∞-constrained predictive encoding and heavy-duty CNN-based soft decoding. The system achieves superior rate-distortion performance over the best existing image compression methods, including BPG, WebP, FLIF, and recent CNN codecs, in both l2 and l∞ error metrics, for bit rates near or above the threshold of perceptually transparent reconstruction. These remarkable coding gains are made possible by deep learning for compression artifact removal. A restoration CNN is designed to map a lossy compressed image to its original. Its unique strength is to enforce a tight error bound on a per-pixel basis. As such, no small distinctive structures of the original image can be dropped or distorted, even if they are statistical outliers that would otherwise be sacrificed by mainstream CNN restoration methods.

We introduce a novel and generic convolutional unit, the DiCE unit, built using dimension-wise convolutions and dimension-wise fusion. The dimension-wise convolutions apply lightweight convolutional filtering across each dimension of the input tensor, while dimension-wise fusion efficiently combines these dimension-wise representations, allowing the DiCE unit to efficiently encode the spatial and channel-wise information contained in the input tensor. The DiCE unit is simple and can be seamlessly integrated into any architecture to improve its efficiency and performance. Compared to depth-wise separable convolutions, the DiCE unit shows significant improvements across different architectures. When DiCE units are stacked to build the DiCENet model, we observe significant improvements over state-of-the-art models across various computer vision tasks, including image classification, object detection, and semantic segmentation.
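To make the efficiency baseline concrete, the standard parameter counts of a regular convolution versus the depth-wise separable convolution that the DiCE unit is compared against can be sketched as follows (function names are illustrative; the DiCE unit's own dimension-wise factorization is a further refinement not reproduced here):

```python
def conv_params(k, c_in, c_out):
    """Weights in a standard k x k convolution (bias ignored)."""
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    """k x k depth-wise filtering per input channel, followed by a
    1 x 1 point-wise convolution that mixes channels."""
    return k * k * c_in + c_in * c_out

# Typical mid-network layer: 3x3 kernel, 128 -> 128 channels.
k, c_in, c_out = 3, 128, 128
std = conv_params(k, c_in, c_out)
dws = depthwise_separable_params(k, c_in, c_out)
print(std, dws, round(std / dws, 1))  # 147456 17536 8.4
```

The roughly 8x reduction is what makes separable convolutions the default in efficient architectures; DiCE's dimension-wise scheme targets the same budget while also filtering along the channel dimension spatially.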
On the ImageNet dataset, DiCENet delivers 2-4% higher accuracy than state-of-the-art manually designed models (e.g., MobileNetv2 and ShuffleNetv2). DiCENet also generalizes better to tasks (e.g., object detection) that are often run on resource-constrained devices, in comparison to state-of-the-art separable-convolution-based efficient networks, including neural-search-based methods (e.g., MobileNetv3 and MixNet).

The current standard of care for peripheral chronic total occlusions involves the manual routing of a guidewire under fluoroscopy. Despite significant improvements in recent decades, navigation remains clinically challenging, with high rates of procedural failure and iatrogenic injury. To address this challenge, we present a proof-of-concept robotic guidewire system with forward-viewing ultrasound imaging to allow visualization and maneuverability through complex vasculature. A 0.035" guidewire-specific ultrasound transducer with matching layer and acoustic backing was designed, fabricated, and characterized. The effect of guidewire motion on signal decorrelation was assessed in simulations and experimentally, driving the development of a synthetic aperture beamforming approach to form images as the transducer is steered on the robotic guidewire. System performance was evaluated by imaging wire targets in water. Finally, proof of concept was demonstrated by imaging an ex vivo artery. The custom transducer was fabricated with a center frequency of 15.
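The synthetic aperture beamforming mentioned above is, at its core, delay-and-sum: echoes recorded at successive transducer positions are time-aligned to each candidate image point and summed coherently, so the sum peaks where a scatterer actually sits. A toy pure-Python sketch (the geometry, pulse model, and element count are illustrative assumptions, not the paper's transducer model):

```python
import math

c = 1500.0           # speed of sound in water, m/s
fs = 50e6            # sampling rate, Hz
# Nine aperture positions at 0.1 mm pitch along x, at depth 0.
elements = [(x * 1e-4, 0.0) for x in range(-4, 5)]
target = (0.0, 5e-3)  # point scatterer 5 mm deep

def echo_time(el, pt):
    """Round-trip pulse-echo delay between element and point."""
    return 2 * math.dist(el, pt) / c

def trace(el, t):
    """Received signal at element el: a Gaussian pulse envelope
    centered on the target's echo delay (toy scattering model)."""
    t0 = echo_time(el, target)
    return math.exp(-(((t - t0) * fs / 8) ** 2))

def das_amplitude(pt):
    """Delay-and-sum: sample every element's trace at the candidate
    point's round-trip delay and sum coherently."""
    return sum(trace(el, echo_time(el, pt)) for el in elements)

# Beamformed amplitude along the axis peaks at the true 5 mm depth.
depths = [4e-3, 4.5e-3, 5e-3, 5.5e-3, 6e-3]
amps = [das_amplitude((0.0, z)) for z in depths]
print(max(range(len(depths)), key=lambda i: amps[i]))  # 2  (i.e., 5 mm)
```

In a synthetic aperture setting the "elements" are successive positions of a single steered transducer rather than a physical array, which is what allows a 0.035" guidewire-mounted device to form an image at all.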