Domain adaptation aims to reduce the mismatch between the source and target domains. A domain adversarial network (DAN) has recently been proposed to incorporate adversarial learning into deep neural networks to create a domain-invariant space. However, DAN's major drawback is that it is difficult to find the domain-invariant space using a single feature extractor. In this article, we propose to split the feature extractor into two contrastive branches, with one branch responsible for class dependence in the latent space and the other for domain invariance. The feature extractor achieves these contrastive goals by sharing the first and last hidden layers but possessing decoupled branches in the middle hidden layers. To encourage the feature extractor to produce class-discriminative embedded features, the label predictor is adversarially trained to produce equal posterior probabilities across all of its outputs instead of one-hot outputs. We refer to the resulting network as the "contrastive adversarial domain adaptation network" (CADAN). We evaluated the embedded features' domain invariance via a series of speaker identification experiments under both clean and noisy conditions. Results demonstrate that the embedded features produced by CADAN lead to a 33% improvement in speaker identification accuracy compared with the conventional DAN.

Recurrent neural networks (RNNs) can remember temporal contextual information over various time steps. The well-known gradient vanishing/explosion problem restricts the ability of RNNs to learn long-term dependencies. The gate mechanism is a well-developed method for learning long-term dependencies in long short-term memory (LSTM) models and their variants.
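The multiplicative gating just mentioned can be sketched as one step of a standard LSTM cell. This is a generic NumPy sketch of the textbook LSTM equations, not tied to any particular model above; the weight layout (four gate blocks stacked in one matrix) is a common convention assumed here for compactness.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b):
    """One LSTM step: multiplicative sigmoid gates control information flow.

    W: shape (4H, D+H), stacking input-gate, forget-gate, output-gate,
       and candidate weights; b: shape (4H,).
    """
    H = h.shape[0]
    z = W @ np.concatenate([x, h]) + b
    i = sigmoid(z[:H])            # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    o = sigmoid(z[2 * H:3 * H])   # output gate
    g = np.tanh(z[3 * H:])        # candidate cell update
    c_new = f * c + i * g         # cell state: multiplicative gating of input
    h_new = o * np.tanh(c_new)    # hidden state: multiplicative gating of output
    return h_new, c_new
```

The forget gate's near-linear path through the cell state (`f * c`) is what lets the error flow stay approximately constant during backpropagation.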
These models usually use multiplicative terms as gates to control the input and output of the RNN during forward computation and to ensure a constant error flow during training. In this article, we propose the use of subtraction terms as another type of gate for learning long-term dependencies. Specifically, the multiplication gates are replaced by subtraction gates, and the activations of the RNN's input and output are directly controlled by subtracting subtrahend terms. The error flow remains constant, as the linear identity connection is retained during training. The proposed subtraction gates allow more flexible choices of internal activation functions than the multiplication gates of LSTM. Experimental results using the proposed subtraction RNN (SRNN) indicate performance comparable to LSTM and the gated recurrent unit in the Embedded Reber Grammar, Penn Treebank, and pixel-by-pixel MNIST experiments. To achieve these results, the SRNN requires approximately three-quarters of the parameters used by LSTM. We also show that a hybrid model combining multiplicative forget gates and subtraction gates can achieve good performance.

Autonomous driving is of great interest to industry and academia alike. The use of machine learning approaches for autonomous driving has long been studied, but mostly in the context of perception. In this article, we take a deeper look at so-called end-to-end approaches to autonomous driving, where the entire driving pipeline is replaced with a single neural network. We review the learning methods, input and output modalities, network architectures, and evaluation schemes in the end-to-end driving literature. Interpretability and safety are discussed separately, as they remain challenging for this approach.
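The subtraction-gate idea from the SRNN passage above can be sketched as a minimal cell step in which the sigmoid-multiplication gates are replaced by subtracted subtrahend terms while the linear identity connection on the cell state is retained. The weight shapes, ReLU subtrahends, and update form below are illustrative assumptions, not the exact SRNN formulation from the article.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def srnn_step(x, h, c, Wg, Ws_in, Ws_out, b):
    """One step of a subtraction-gated cell (illustrative sketch).

    Instead of multiplying by sigmoid gates, the candidate update and the
    output are controlled by SUBTRACTING learned subtrahend terms. The cell
    state keeps a linear identity connection (c + ...), so the error flow
    through time remains constant during training.
    """
    xh = np.concatenate([x, h])
    g = np.tanh(Wg @ xh + b)       # candidate update
    s_in = relu(Ws_in @ xh)        # input subtrahend
    c_new = c + (g - s_in)         # identity connection + subtractive input control
    s_out = relu(Ws_out @ xh)      # output subtrahend
    h_new = np.tanh(c_new) - s_out # subtractive output control
    return h_new, c_new
```

Note the parameter saving relative to the LSTM sketch: three weight blocks of shape (H, D+H) rather than four, consistent with the roughly three-quarters parameter count reported above.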
Beyond providing a comprehensive overview of existing methods, we conclude the review with an architecture that combines the most promising elements of end-to-end autonomous driving systems.

To meet the increasing demand for denser integrated circuits, feedforward control plays an important role in achieving high servo performance of wafer stages. Preexisting feedforward control methods, however, are subject either to inflexibility to reference variations or to poor robustness. In this article, these deficiencies are removed by a novel variable-gain iterative feedforward tuning (VGIFFT) method. The proposed VGIFFT method attains 1) no involvement of any parametric model, through data-driven estimation; 2) high performance regardless of reference variations, through feedforward parameterization; and 3) especially high robustness against stochastic disturbance as well as model uncertainty, through a variable learning gain. Moreover, VGIFFT breaks the tradeoff between fast convergence and high robustness to which preexisting methods are subject. Experimental results validate the proposed method and confirm its effectiveness and enhanced performance.

Battery-less and ultra-low-power implantable medical devices (IMDs) with minimal invasiveness are the latest therapeutic paradigm. This work presents a 13.56-MHz inductive power receiver system-on-a-chip with an input sensitivity of -25.4 dBm (2.88 μW) and an efficiency of 46.4% while driving a light load of 30 μW. In particular, a real-time resonance compensation scheme is proposed to mitigate resonance variations commonly seen in IMDs due to different dielectric environments, loading conditions, and fabrication mismatches. The power-receiving front-end incorporates a 6-bit capacitor bank that is periodically adjusted according to a successive-approximation resonance-tuning (SART) algorithm.
The compensation range is as much as 24 pF; tuning converges within 12 clock cycles and incurs negligible power overhead. The harvested voltage, ranging from 1.7 V to 3.3 V, is digitized on-chip and transmitted via ultra-wideband impulse-radio (IR-UWB) back-telemetry for closed-loop regulation. The IC is fabricated in a 180-nm CMOS process with an overall current dissipation of 750 nA. At a separation distance of 2 cm, the end-to-end power transfer efficiency reaches 16.1% while driving the 30-μW load and is immune to artificially induced resonance-capacitor offsets. The proposed system can be applied to various battery-less IMDs, with the potential to improve power transfer efficiency by orders of magnitude.
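The SART adjustment can be sketched as a SAR-style bit trial over the 6-bit capacitor code. In this illustrative sketch, `measure` is a hypothetical figure-of-merit readout (e.g., the rectified voltage, where higher is better), and the keep/discard rule is an assumption for illustration, not the chip's exact logic.

```python
def sart_tune(measure, bits=6):
    """Successive-approximation resonance tuning (illustrative sketch).

    Trials capacitor-bank bits MSB-first, keeping a bit only if the
    measured figure of merit improves. Like a SAR ADC search, it
    converges in a fixed number of trials (one per bit of the bank).
    """
    code = 0
    best = measure(code)
    for k in reversed(range(bits)):
        trial = code | (1 << k)   # tentatively set the next bit, MSB-first
        m = measure(trial)
        if m > best:              # keep the bit only if resonance improves
            code, best = trial, m
    return code
```

With a unimodal figure of merit the search settles near the resonance peak after `bits` trials, consistent with the fixed-cycle convergence reported for the chip.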