secondary cluster409.
Disclosure of information constituting a commercial secret is an action or inaction as a result of which information constituting a commercial secret, in any possible form (oral, written, or other, including using technical means), becomes known to third parties without the consent of the owner of such information, or contrary to an employment or civil-law contract410.
Discrete feature is a feature with a finite set of possible values. For example, a feature whose values may only be animal, vegetable, or mineral is a discrete (or categorical) feature. Contrast with continuous feature411.
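A minimal sketch of handling such a feature, using the example values from the definition; the names `CATEGORIES` and `one_hot` are illustrative, not from any library:

```python
# A discrete (categorical) feature takes one of a finite set of values,
# so it can be one-hot encoded before being fed to a model.
CATEGORIES = ["animal", "vegetable", "mineral"]

def one_hot(value, categories=CATEGORIES):
    """Map a discrete feature value to a one-hot vector."""
    if value not in categories:
        raise ValueError(f"unknown category: {value}")
    return [1 if c == value else 0 for c in categories]

print(one_hot("vegetable"))  # [0, 1, 0]
```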
Discrete system is any system with a countable number of states. Discrete systems may be contrasted with continuous systems, which may also be called analog systems. A finite discrete system is often modeled with a directed graph and is analyzed for correctness and complexity according to computational theory. Because discrete systems have a countable number of states, they may be described in precise mathematical models. A computer is a finite state machine that may be viewed as a discrete system. Because computers are often used to model not only other discrete systems but continuous systems as well, methods have been developed to represent real-world continuous systems as discrete systems. One such method involves sampling a continuous signal at discrete time intervals412.
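The sampling method mentioned above can be sketched as follows; the function names and the 1 Hz sine signal are illustrative assumptions:

```python
import math

# Represent a continuous signal as a discrete system by sampling it
# at fixed time intervals of `step` seconds.
def sample(signal, duration, step):
    """Return the signal's values at t = 0, step, 2*step, ..."""
    n = int(duration / step)
    return [signal(i * step) for i in range(n)]

# Four samples of a 1 Hz sine over one second: t = 0, 0.25, 0.5, 0.75
samples = sample(lambda t: math.sin(2 * math.pi * 1.0 * t),
                 duration=1.0, step=0.25)
```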
Discriminative model is a model that predicts labels from a set of one or more features. More formally, discriminative models define the conditional probability of an output given the features and weights; that is, p(output | features, weights). For example, a model that predicts whether an email is spam from features and weights is a discriminative model. The vast majority of supervised learning models, including classification and regression models, are discriminative models. Contrast with generative model413.
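A minimal sketch of such a model using the spam example: logistic regression computes p(output | features, weights) directly. The particular features, weights, and bias below are made up for illustration:

```python
import math

# A discriminative model: map features and weights straight to
# p(spam | features, weights) via the logistic (sigmoid) function.
def p_spam(features, weights, bias=0.0):
    z = sum(w * x for w, x in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# e.g. features = [number_of_links, contains_word_free]
prob = p_spam([3.0, 1.0], weights=[0.8, 1.5], bias=-2.0)
# prob is a probability between 0 and 1
```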
Discriminator is a system that determines whether examples are real or fake; specifically, the subsystem within a generative adversarial network that determines whether the examples created by the generator are real or fake414.
Disparate impact – making decisions about people that impact different population subgroups disproportionately. This usually refers to situations where an algorithmic decision-making process harms or benefits some subgroups more than others415.
Disparate treatment – factoring subjects’ sensitive attributes into an algorithmic decision-making process such that different subgroups of people are treated differently416.
Dissemination of information – actions aimed at obtaining information by an indefinite circle of persons or transferring information to an indefinite circle of persons417.
Dissemination of personal data – actions aimed at disclosing personal data to an indefinite circle of persons418.
Distributed artificial intelligence (DAI) (also decentralized artificial intelligence) is a subfield of artificial intelligence research dedicated to the development of distributed solutions for problems. DAI is closely related to and a predecessor of the field of multi-agent systems419.
Distributed registry technologies (Blockchain) are algorithms and protocols for decentralized storage and processing of transactions structured as a sequence of linked blocks without the possibility of their subsequent change420.
Distribution series are series of absolute and relative numbers that characterize the distribution of population units according to a qualitative (attributive) or quantitative attribute. Distribution series built on a quantitative basis are called variational421.
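A small sketch of building such a series on a quantitative attribute (a variational series); the data values are illustrative:

```python
from collections import Counter

# A variational (distribution) series: absolute and relative frequencies
# of population units by a quantitative attribute.
values = [1, 2, 2, 3, 3, 3, 4]

absolute = Counter(values)                          # absolute frequencies
n = len(values)
relative = {v: c / n for v, c in absolute.items()}  # relative frequencies
```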
Divisive clustering – see hierarchical clustering422,423.
Documentation – generically, any information on the structure, contents, and layout of a data file. Sometimes called «technical documentation» or «a codebook». Documentation may be considered a specialized form of metadata424.
Documented information – information recorded on a material carrier by means of documentation with details that make it possible to determine such information, or, in cases established by the legislation of the Russian Federation, its material carrier425.
Downsampling – an overloaded term that can mean either of the following: (1) reducing the amount of information in a feature in order to train a model more efficiently, for example, downsampling high-resolution images to a lower-resolution format before training an image recognition model; (2) training on a disproportionately low percentage of over-represented class examples in order to improve model training on under-represented classes. For example, in a class-imbalanced dataset, models tend to learn a lot about the majority class and not enough about the minority class; downsampling helps balance the amount of training on the majority and minority classes426.
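A minimal sketch of the second meaning (rebalancing a class-imbalanced dataset); the `ratio` parameter and function name are illustrative assumptions:

```python
import random

# Downsample the over-represented (majority) class so that training
# sees roughly `ratio` majority examples per minority example.
def downsample_majority(majority, minority, ratio=1.0, seed=0):
    """Keep all minority examples; sample the majority class down."""
    rng = random.Random(seed)
    k = min(len(majority), int(ratio * len(minority)))
    return rng.sample(majority, k) + list(minority)

# 1000 majority examples vs. 3 minority examples -> 3 + 3 after downsampling
balanced = downsample_majority(majority=list(range(1000)),
                               minority=["a", "b", "c"])
```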
Driver is computer software that allows other software (the operating system) to access the hardware of a device427.
Drone – unmanned aerial vehicle (unmanned aerial system)428.
Dropout regularization is a form of regularization useful in training neural networks. Dropout regularization works by removing a random selection of a fixed number of the units in a network layer for a single gradient step. The more units dropped out, the stronger the regularization429.
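A minimal sketch of one dropout step on a single layer's activations. This uses the common «inverted dropout» variant, in which surviving units are rescaled so the expected activation is unchanged; the function name and rate are illustrative:

```python
import random

# One dropout step: each unit's activation is zeroed with probability
# `rate`; kept units are scaled by 1/(1 - rate) (inverted dropout).
def dropout(activations, rate, seed=0):
    rng = random.Random(seed)
    keep = 1.0 - rate
    return [a / keep if rng.random() < keep else 0.0
            for a in activations]

out = dropout([1.0, 2.0, 3.0, 4.0], rate=0.5)
# Each unit is either dropped (0.0) or doubled (scaled by 1/0.5)
```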
Dynamic epistemic logic (DEL) is a logical framework dealing with knowledge and information change. Typically, DEL focuses on situations involving multiple agents and studies how their knowledge changes when events occur430.
Dynamic model is a model that is trained online in a continuously updating fashion. That is, data is continuously entering the model431,432.
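A toy sketch of the online-updating idea: instead of batch retraining, the model's state is revised as each new example arrives. Here the «model» is just an incrementally updated mean, which is an illustrative stand-in, not a real training procedure:

```python
# A dynamic (online) model updates continuously as data enters it.
class OnlineMean:
    """Maintains a running estimate, updated once per incoming example."""
    def __init__(self):
        self.n = 0
        self.value = 0.0

    def update(self, x):
        self.n += 1
        self.value += (x - self.value) / self.n  # incremental mean update
        return self.value

model = OnlineMean()
for x in [10.0, 12.0, 14.0]:
    model.update(x)
# model.value is now the mean of the examples seen so far: 12.0
```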
«E»
Eager execution is a TensorFlow programming environment in which operations run immediately. By contrast, operations called in graph execution don’t run until they are explicitly evaluated. Eager execution is an imperative interface, much like the code in most programming languages. Eager execution programs are generally far easier to debug than graph execution programs433.
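A TensorFlow-free sketch of the contrast described above: eager-style code computes each operation immediately, while graph-style code builds a deferred computation that produces a value only when explicitly evaluated. The deferred `lambda` here stands in for a real computation graph:

```python
# Eager style: the value exists as soon as the line executes,
# so it can be inspected and debugged immediately.
eager_result = 2 * 3 + 1          # computed right away

# Graph style: first describe the computation, evaluate it later.
graph = lambda: 2 * 3 + 1         # nothing is computed yet
deferred_result = graph()         # explicit evaluation step
```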
Eager learning is a learning method in which the system tries to construct a general, input-independent target function during training of the system, as opposed to lazy learning, where generalization beyond the training data is delayed until a query is