
NLP models are vulnerable to data poisoning attacks. One type of attack can plant a backdoor in a model by injecting poisoned examples in training, causing the victim model to misclassify test instances which include a specific pattern. Although defences exist to counter these attacks, they are specific to an attack type or pattern. In this paper, we propose a generic defence mechanism by making the training process robust to poisoning attacks through gradient shaping methods, based on differentially private training. We show that our method is highly effective in mitigating, or even eliminating, poisoning attacks on text classification, with only a small cost in predictive accuracy.
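As an illustration of the gradient-shaping idea behind this defence, the sketch below applies DP-SGD-style per-example gradient clipping and Gaussian noising during training; the model, loss function, and hyperparameters are placeholders rather than the paper's exact configuration.

```python
import torch

def dp_sgd_step(model, loss_fn, batch_x, batch_y, optimizer,
                clip_norm=1.0, noise_multiplier=1.0):
    params = [p for p in model.parameters() if p.requires_grad]
    summed = [torch.zeros_like(p) for p in params]

    # Clip each example's gradient individually so no single (possibly
    # poisoned) example can dominate the update.
    for x, y in zip(batch_x, batch_y):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, params)
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in grads))
        scale = min(1.0, clip_norm / (float(total_norm) + 1e-12))
        for s, g in zip(summed, grads):
            s += g * scale

    # Add Gaussian noise to the aggregated gradient, then take a step.
    for p, s in zip(params, summed):
        noise = torch.normal(0.0, noise_multiplier * clip_norm, size=s.shape)
        p.grad = (s + noise) / len(batch_x)
    optimizer.step()
    optimizer.zero_grad()
```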
We propose the first general-purpose gradient-based adversarial attack against transformer models. Instead of searching for a single adversarial example, we search for a distribution of adversarial examples parameterized by a continuous-valued matrix, hence enabling gradient-based optimization. We empirically demonstrate that our white-box attack attains state-of-the-art attack performance on a variety of natural language tasks, outperforming prior work in terms of adversarial success rate with matching imperceptibility as per automated and human evaluation. Furthermore, we show that a powerful black-box transfer attack, enabled by sampling from the adversarial distribution, matches or exceeds existing methods, while only requiring hard-label outputs.
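A minimal sketch of the core mechanism, assuming a Gumbel-softmax relaxation over a matrix of token logits and a victim model that accepts soft input embeddings; `victim_forward`, `victim_embeddings`, and the margin loss below are illustrative stand-ins, not the paper's exact API.

```python
import torch
import torch.nn.functional as F

def distributional_attack(victim_forward, victim_embeddings, seq_len,
                          true_label, steps=100, lr=0.3, tau=1.0):
    vocab_size, _ = victim_embeddings.shape
    # Theta parameterises one categorical distribution over tokens per position.
    theta = torch.zeros(seq_len, vocab_size, requires_grad=True)
    opt = torch.optim.Adam([theta], lr=lr)

    for _ in range(steps):
        # Differentiable sample from the adversarial distribution.
        pi = F.gumbel_softmax(theta, tau=tau, hard=False)        # (seq_len, vocab)
        inputs_embeds = pi @ victim_embeddings                   # (seq_len, dim)
        logits = victim_forward(inputs_embeds.unsqueeze(0))[0]   # (num_classes,)

        # Margin loss: push the true-class logit below the strongest other class.
        target = logits[true_label]
        competitor = torch.cat([logits[:true_label], logits[true_label + 1:]]).max()
        loss = target - competitor

        opt.zero_grad()
        loss.backward()
        opt.step()

    # Hard adversarial examples can then be drawn from the learned distribution.
    return torch.distributions.Categorical(logits=theta).sample()
```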
Recent work has demonstrated the vulnerability of modern text classifiers to universal adversarial attacks, which are input-agnostic sequences of words added to text processed by classifiers. Despite being successful, the word sequences produced in such attacks are often ungrammatical and can be easily distinguished from natural text. We develop adversarial attacks that appear closer to natural English phrases and yet confuse classification systems when added to benign inputs. We leverage an adversarially regularized autoencoder (ARAE) to generate triggers and propose a gradient-based search that aims to maximize the downstream classifier's prediction loss. Our attacks effectively reduce model accuracy on classification tasks while being less identifiable than prior models as per automatic detection metrics and human-subject studies. Our aim is to demonstrate that adversarial attacks can be made harder to detect than previously thought and to enable the development of appropriate defenses.
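A rough sketch of such a gradient-based trigger search, assuming a generator that can decode a latent code into a relaxed (soft) token sequence; `arae_decode_soft` and `classifier_loss` are hypothetical stand-ins for the actual ARAE and victim classifier.

```python
import torch

def search_natural_trigger(arae_decode_soft, classifier_loss, benign_batch,
                           labels, latent_dim=100, steps=200, lr=0.1):
    # Latent code for the trigger; optimised instead of discrete tokens.
    z = torch.randn(1, latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):
        # Decode z into a soft token sequence so gradients can flow from the
        # classifier loss back to the latent code.
        trigger_soft = arae_decode_soft(z)                 # (1, trig_len, vocab)
        # Prepend the trigger to each benign input and measure the victim's loss.
        loss = classifier_loss(trigger_soft, benign_batch, labels)
        # Gradient *ascent* on the loss: a good trigger degrades predictions.
        opt.zero_grad()
        (-loss).backward()
        opt.step()

    return z.detach()  # decode with the ARAE to obtain the final trigger phrase
```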
Wireless Sensor Networks (WSNs) are deployed in adversarial environments and used for critical applications such as battlefield surveillance and medical monitoring, so security weaknesses are a serious concern. The severe resource constraints of WSNs give rise to the need for resource-bound security solutions. The Implicit Geographic Forwarding Protocol (IGF) is considered stateless: it does not maintain routing tables and does not depend on knowledge of the network topology or on the presence or absence of particular nodes in the WSN. The SIGF protocols were developed to provide a range of mechanisms that increase security in IGF, keeping its dynamic connectivity features while providing effective defenses against potential attacks. These mechanisms protect against several attacks, such as black hole, Sybil, and retransmission attacks, but they are unable to deal with physical attacks. This research presents a detailed study of the SIGF-2 protocol and proposes an improvement to it, in which we use the concept of deployment knowledge from the random key pool algorithm for key management to defend against physical attacks. Evaluation of simulation results with different parameters showed that our proposal improves on the studied protocol.
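For intuition, the toy sketch below shows the deployment-knowledge variant of random key-pool pre-distribution: each deployment group draws key rings from an overlapping group sub-pool, so physically capturing one node exposes only keys relevant to its own neighbourhood. Pool and ring sizes are invented for the example and are not the paper's parameters.

```python
import random

POOL_SIZE = 1000        # size of the global key pool (illustrative)
GROUP_POOL_SIZE = 200   # keys available to each deployment group
KEY_RING_SIZE = 50      # keys preloaded on each node
NUM_GROUPS = 10

global_pool = list(range(POOL_SIZE))

def group_subpool(group_id):
    # Adjacent deployment groups get overlapping windows of the global pool,
    # a crude stand-in for deployment-knowledge-based sub-pool sharing.
    start = group_id * (POOL_SIZE - GROUP_POOL_SIZE) // (NUM_GROUPS - 1)
    return global_pool[start:start + GROUP_POOL_SIZE]

def key_ring(group_id):
    # Each node draws its key ring only from its own group's sub-pool, so a
    # physically captured node reveals keys that matter mostly to its area.
    return set(random.sample(group_subpool(group_id), KEY_RING_SIZE))

# Two neighbouring nodes can establish a secure link if their rings intersect.
node_a, node_b = key_ring(0), key_ring(1)
print("shared keys between neighbouring-group nodes:", len(node_a & node_b))
```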
After the September 11th attacks, the USA launched its so-called war on terrorism using a number of legal and illegal means. Governmental and non-governmental reports of international organizations have indicated that American military bases have been used as incarceration and military court sites. Guantanamo is one of the most important American military bases used in the war against terrorism. Other military bases throughout the world, such as Bagram base in Afghanistan, have been used for activities which exceed their traditional military objectives. These bases have experienced violations of human rights and international humanitarian law. Military bases have been converted into incarceration and torture centers which witness serious violations of the international rights of prisoners. Prisoners have been deprived of their simplest legitimate rights. Such violations have been criticized by a number of regional and international organizations, which pressured the American government to announce its decision to close Guantanamo.
The security of several recently proposed ciphers relies on the fact that the classical methods of cryptanalysis (e.g. linear or differential attacks) are based on probabilistic characteristics, which makes their security grow exponentially with the number of rounds. These ciphers therefore lack suitable immunity against algebraic attacks, which became more powerful after the introduction of the XSL algorithm. In this research we try a method to increase the immunity of the AES algorithm against algebraic attacks and then study the effect of this adjustment.
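For background on why the AES S-box attracts algebraic attacks (this is an illustration, not the paper's proposed adjustment): before the affine step, the S-box is inversion in GF(2^8), so every nonzero input/output pair satisfies x·y = 1, which yields the low-degree quadratic relations over GF(2) that XSL-style attacks exploit. The small check below verifies the multiplicative relation.

```python
def gf_mul(a, b, poly=0x11B):
    # Multiplication in GF(2^8) with the AES reduction polynomial x^8+x^4+x^3+x+1.
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= poly
        b >>= 1
    return r

def gf_inv(x):
    # Inverse via x^254, since x^255 = 1 for all nonzero x in GF(2^8).
    r = 1
    for _ in range(254):
        r = gf_mul(r, x)
    return r

assert all(gf_mul(x, gf_inv(x)) == 1 for x in range(1, 256))
print("x * inv(x) = 1 holds for every nonzero x in GF(2^8)")
```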
