Tuesday 29 September 2020

Institute of Physics and Technology (FTI)

The Institute trains specialists in applied physics, informatics and information security for science and for high-tech sectors of the economy. Graduates will be able to create new mathematical methods and technologies for the computer processing of information.

The Physicotechnical Institute has 4 areas of training, of which 2 are directly related to IT:

"Applied math"

Graduates will learn to create:

mathematical methods, models and technologies for the computer processing of information,

software and hardware-based information protection tools,

mathematical methods for analyzing information in the Internet space,

automated data processing systems,

models and technologies of cryptographic information protection, authentication, digital signature and cryptographic protocols (see the sketch after this list),

cryptographic tools in banking, commercial and other areas.
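
To make the "digital signature" item above concrete: the sketch below signs and verifies a short message with an Ed25519 key pair. It is only an illustration, not part of the curriculum; the message text is invented, and the third-party Python cryptography package is one library choice among many:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    private_key = Ed25519PrivateKey.generate()   # the signer keeps this secret
    public_key = private_key.public_key()        # anyone may hold this

    message = b"transfer 100 UAH to account 42"  # example payload, invented
    signature = private_key.sign(message)        # 64-byte Ed25519 signature

    try:
        public_key.verify(signature, message)    # raises if message or signature was altered
        print("signature is valid")
    except InvalidSignature:
        print("signature is invalid")

Verification fails if even one byte of the message changes, which is what makes such signatures useful for authentication and cryptographic protocols.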

The direction of training includes 3 specializations:

Mathematical methods of computer modeling - development of computer models for forecasting and decision-making in industry, economics, ecology; creation of fundamentally new methods and technologies for processing and transmitting information.

Mathematical methods of cyber security - the development of mathematical methods, models and technologies for creating integrated protection systems that combine software, mathematical and hardware tools.

Mathematical methods of cryptology - development of cryptographic methods of information protection and innovative technologies for organizing cryptographic information protection systems.

Security of information and communication systems

Graduates will learn to create:

information protection and security policy in computer systems and networks,

protection of spoken (voice) information and of information in telecommunication networks and communication systems,

integrated information security systems combining software and hardware means of protection,

software and hardware methods of information protection,

cyber security support systems.

Andrey, FTI:

Applied mathematics is the more scientific direction. A lot of physics, mathematics and fundamental IT knowledge (multimedia technologies, structural methods of pattern recognition, quantum informatics). In addition, we learn C++, Java, OOP, web programming, system programming, cloud computing and GRID, operating systems, databases, networks, quality assurance and software systems design.

ICS security is more of an engineering specialty, but there are many fundamental subjects there as well.

Oleg, FTI:

There is a huge amount of mathematics in applied mathematics, but there is plenty of physics too - everything from mechanics to quantum informatics is covered here. Plus, there are several semesters of programming, and in general you often have to use it in other disciplines.

In the security program there is the least physics, and much less mathematics than in applied mathematics. Informatics is about the same. It is much easier to study than applied mathematics.

Phystech differs from all other faculties primarily in the level of knowledge (one of the top places in KPI). But it's worth enrolling only if you are truly willing to learn. The dropout rate in the first years is high.

Nikita, FTI:

Phystech is an institute within KPI that was founded at the initiative of the SBU, which urgently needed a pool of strong specialists in mathematics, physics and IT. For several years Phystech was just about the strongest mathematics faculty in Ukraine, but then people started getting in who, to put it mildly, do not love it. It's a shame, what can I say.

Monday 28 September 2020

5 reasons to become an engineer

Engineering is a very attractive field for employers as well as for students. Beyond the great diversity of its branches, this field offers many advantages and opportunities to those who wish to pursue a career in it, in particular in computer engineering.

As a result, some baccalaureate holders place engineering among the first options in their choice of training, while others, still undecided, hesitate to enter the field. In this article, we take a closer look at the advantages this field offers its students and professionals, to help them better orient their careers.

Dear readers, here are 5 good reasons to become an engineer!

An area that fosters creativity 

Both during training and in the workplace, engineering is a field that greatly encourages creativity and innovation. From the first years of training, future engineers are faced with serious challenges that stimulate their sense of innovation through the resolution of complex problems they encounter in the projects they are asked to manage.

Far from aesthetic creativity, creativity in engineering comes as a response to a problem or a need. The work of an engineer therefore consists in producing technical or technological solutions for a problem by developing objects, models, processes, systems or products that are both innovative and useful. If you then feel that you have a sense of creativity and innovation, the field of engineering will be the perfect opportunity for you to deploy your skills!

A rich and fascinating training

Engineering training is very interesting for people who are passionate about the field. The programs are rich in content and very diverse, with many projects and assignments aimed at developing students' know-how and preparing them for the reality of the job market. In addition, the field is characterized by a growing diversity of specialties: new sectors keep emerging to meet the evolving needs of companies. For example, recent years have seen the emergence of exciting new sectors such as artificial intelligence, Cloud Computing and Big Data. These new sectors produce specialized profiles in step with the latest technological innovations.

A constantly evolving profession

Unlike many other disciplines, the engineering profession is a constantly evolving profession, due to the incessant development of different technologies and techniques. Indeed, each time a significant technical innovation takes place, it automatically leads to the creation of a need for engineers specialized in the field.

In recent years alone, for example, the appearance of artificial intelligence, Cloud solutions and Big Data has created a need for specialized engineering profiles, capable not only of handling these innovations, but also of developing and improving them over time.

The field of engineering therefore opens the doors to the newest disciplines on the market, which can even be described as “professions of tomorrow”. And beyond the fact that these specializations are fascinating, their novelty means that they offer graduates many opportunities and virtually guaranteed employability.

Friday 25 September 2020

Computer Engineering deals with computer systems

These systems comprise both physical components (hardware) and software. The job of a computer engineer focuses on the design, manufacture, planning and operation of computers, and on the creation of computer networks and other automated machines. This is the main reason why graduates are in high demand in today's job market.

Academic plan

The program is taught by professional academic staff with experience in basic research and in industry, which provides students with a high level of education.

Purpose

The objective of the program is to prepare graduates theoretically and practically for a successful career in Computer Engineering, for a Master's degree in Computer Engineering, and/or for highly specialized sectors of this field of activity.

Objectives

University of Nicosia

Providing a broad range of knowledge of the basic principles of engineering and computer science relevant to the field of computer engineering.

Teaching the methods and hardware components of computers, and providing laboratory experience in the design and assembly of components, programs, data and operating systems, so that students can meet the needs of the field of computer engineering.

Development of specialized skills and knowledge for the identification, analysis and development of solutions to problems related to computer hardware or software.

Development of communication skills so that students can present their technological ideas orally, in writing and using graphic means.

Introducing students to the professional society of computer engineers and to the possibility of admission of graduates to professional organizations such as the Cyprus Scientific and Technical Chamber (ETEK).

Developing continuous self-improvement, free thinking and inventiveness, to prepare students for conditions of constant change and for a successful career in computer engineering and/or further academic study.

Laboratory exercises

The University of Nicosia has well-equipped computer engineering laboratories, where students, under the guidance of their teachers, conduct experiments in small groups of no more than 12 people. Experimental lessons include: electrical circuits, electronics, digital systems, and microprocessors.

Internships

Students, during their studies, get the opportunity to undergo practical training, which increases their potential for building a successful career after graduation.

Professional association

Graduates of the Computer Engineering course at the University of Nicosia have the opportunity to become members of ETEK.

Thursday 24 September 2020

Financial Engineering and Audit

The Financial Engineering and Audit branch aims to train specialized executives, on the one hand in the field of financial engineering - diagnosis, evaluation and financial arrangements - and on the other hand in the field of audit and control of structures, processes and systems that are becoming more and more complex.

Indeed, the main objective of this branch is to train graduates capable of using innovative, elaborate and transversal methods to find original solutions to the financial problems that affect a company's structures in their different facets.

This training combines conceptual, academic and practical rigor, enabling graduates to progress, systematize their knowledge and build their careers.

Programs

Financial engineering: financial diagnosis, introduction to finance, financial analysis, financial decision-making, international finance, Financial dashboards, financial risk management, workshops in financial engineering, financial arrangements, LBO, mergers and acquisitions, capital market.

Mathematics: analysis, probability, statistics, financial mathematics, statistical software, data analysis.

Audit and finance: initiation to financial audit, accounting and financial audit, workshop on audit techniques, internal audit, social audit, operational audit, financial diagnosis, financial decision-making, asset management.

Wednesday 23 September 2020

Computer engineers are still in high demand

Survey after survey, the trend is confirmed: the computer engineer remains a precious commodity for companies, whether in IT services or in digital service companies (ESN), formerly known as SSII. The 2017 edition of the survey on "The workforce needs of companies", conducted by Pôle Emploi and the Research Center for the Study and Observation of Living Conditions (Crédoc), underlines this once again.

Another record according to Apec

"Like last year, the proportion of recruitments deemed difficult is high for engineers and study executives and IT R&D (62.8%) , but remains at the same level as the average of the previous five years", notes the organism in its note of insights and syntheses.

Same story at the Association for the Employment of Executives (Apec) in its second-quarter economic report. For the second consecutive time, business recruitment forecasts by function - commerce, human resources, communication, etc. - are “historically high”, at 30%, for IT. Apec explains this enthusiasm by companies' need for support in their digital transformation. IT functions also figure in the various "tops" of the BMO survey: the 15 occupations most sought after by employers, the top 10 occupations with the largest number of recruitment projects deemed difficult, and the 10 occupations where recruitment difficulties are highest. Recruiting computer scientists is, in short, no simple matter.

A more open recruitment process

The large ESNs, which each year hire several thousand IT specialists, the majority of whom are young graduates , are deploying strategies to meet their objectives. Like GFI Informatique, which wants to reach 2,000 recruitments by the end of the year. In detail, the company is looking for 65% of experienced profiles and 35% of young graduates, work-study students and interns.

“Before, we weren't looking for so many junior profiles; it's a share that we have doubled over the past four years. This is one way of responding to the market shortage,” explains Marlène Escure, Director of Recruitment France. A shortage that can be explained in two different ways. Developers are plentiful but also highly sought after, forcing companies to stand out from the crowd to attract them.

But also consultants and data scientists

On the other hand, certain expert profiles are rare, such as architects, consultants in digital or mobility, and data scientists. GFI also uses Pôle Emploi's operational preparation for employment (POE) to train qualified people who are far removed from the IT field. “We always have to imagine new recruitment channels, because responses to advertisements and CV databases are no longer sufficient. We must be present on social networks, of course, but also inventive with recruitment events. In particular we deploy hiring without CVs, based on soft skills such as the ability to upgrade one's skills and to work in a team, and we are extending our search beyond rank-A engineering schools,” continues Marlène Escure. More openness, therefore, which benefits the multitude of computer training courses in schools and universities that are not always well known to companies.

Tuesday 22 September 2020

What is computer engineering for?

Computer engineering is the process of analyzing and designing all hardware, software, and operating systems for a computer system. It combines two fields: computer science and electrical engineering. Computer science and computer engineering are often confused as the same thing, but the two areas are very different. While the responsibilities of computer scientists consist largely of software development, computer engineers are trained both in software development and in the integration of hardware and software.


What is computer engineering

Computer engineers also focus on computer networks. They must apply their knowledge and understanding of logic design and microprocessor systems, as well as computer architecture and computer interfaces. In the course of their work, computer engineers find answers to fundamental computing problems by creating the next great technological solution.

Case Western Reserve was the first university to offer a computer engineering program in 1971; however, there are now over 100 accredited universities in the world. Students who wish to pursue a major in this area must have a deep knowledge and understanding of mathematics and science. If a student excels in these subjects, computer engineering is likely to be a good fit for them. Computer engineers also need to have a clear focus on detail, teamwork, and analytical skills. Good communication skills are also required because computer engineers often have to go outside the laboratory to deal with clients and other professionals.

The field of computer engineering is broad, but there are many smaller areas that most students focus on. Sometimes a person studying to be a computer engineer chooses a major with an emphasis on computer architecture, or on the way information is organized within an organization. Other possible areas of study are database systems, operating systems, and software development. Whichever of these or other fields the student chooses, they will bring a specialized advantage to their work in the field.

This area is constantly growing and changing due to the rapid pace of technological progress. Therefore, it is important that professionals constantly improve and master new things in order to keep abreast of all new developments. Computer engineers are often required to attend training seminars created by vendors, hardware and software manufacturers, colleges and universities, or other private institutions.

At some point, computer engineering branches out in two directions. The professional must decide whether he or she wants to focus on the technology side of the field or pursue a career that combines technology and management. If the latter option is chosen, he must continue his education with additional courses in business and finance and an MBA may be required.

Monday 21 September 2020

Programming is not for everyone

The idea that everyone should learn to code has been gaining ground in the tech community lately. But there is one problem: programming is not the new literacy.

If you periodically pay attention to the cultural fads of Silicon Valley, then you have certainly heard of the "Learn to Code" movement. Politicians, nonprofits like Code.org, and even former New York City Mayor Michael Bloomberg are promoting what they see as a skill that will soon be needed by the entire working-age population.

Perhaps this is partly true.

But the real picture is somewhat more complicated.

We live in an ultra-competitive world in which people resort to any means to make ends meet. And it’s totally unfair to sell programming as a ticket to economic salvation for the masses.

Let's take bootcamps for programmers. The success of Silicon Valley software engineers has entered the mainstream and become a role model, and today many people dream of starting a startup or becoming an engineer. HBO shows us programmers under thirty who code at night while making millions of dollars. The American public is fascinated by characters like Elon Musk and Mark Zuckerberg, who seemingly make fortunes overnight. Programming fever has penetrated even the White House, with President Obama pushing to include computer science in the general education curriculum.

And for some inexplicable reason, it's not just bootcamps and politicians who encourage people to learn to code.

A powerful chorus of voices echoes this idea from all walks of life, from Hollywood to the luminaries of science and technology. But despite this growing euphoria, I am very skeptical about all these bootcamps. Although our culture has developed a highly attractive image of Silicon Valley, and glossy bootcamp brochures promise well-paying jobs, many of these organizations are not accredited, they do not publish job statistics, and do not care about the employment of their graduates. Yes, there are many bootcamps that enter into agreements with employing companies, but many more are run by vendors of a miraculous panacea who profit from the desperation of the average American.

Don't get me wrong: I believe that programming and development skills are really important. But only in the appropriate context, and only for those people who are ready to earn success in this field with sweat and blood. The same can be said for many other skills. So I urge people not simply to learn to program, but to dive deeply into the subject.

If all the attention is focused on the code, then the task of choosing the “right” method for solving a problem overshadows the importance of understanding the problem itself.

Before we start working on a software solution to any problem, we must decide what the problem is - and whether it is a problem at all. If we fixate on solving it with code, regardless of whether it is a programming problem at all, we lose sight of the reason for solving it, and we gain nothing. I have a close friend from Stanford who once won the Association for Computing Machinery's International Collegiate Programming Contest. And he says that the most important thing he learned from the championship is the need for a deep understanding of the problem you are trying to solve.

You have to ask yourself: "Is there a problem?" and "Can the Feynman-Taft principle be applied to explain this problem so that others can understand you?"

A friend of mine told me that even in elite schools, students read the assignment describing the problem only once, and then immediately start writing code... The year my friend won the championship, he saw that even these elite students rushed to solve complex problems with their only tool - programming. He himself wrote code only after carefully considering the task at hand. He spent almost all of the time allotted for solving the problem just thinking, and began to write code only a few minutes before the end.

He became a champion.

He knew that a hasty typing of the code would not solve the problem; he needed to approach the solution in a cool and collected manner.

The excessive emphasis on coding ignores the plight of today's developers.

In this industry, technology is changing rapidly.

Just a few years ago I used Objective-C, and now I write exclusively in Swift. Developers who have never written a single line of Objective-C are looking for jobs today. Swift is easier to learn, safer, uses modern development paradigms, and is much more elegant than Objective-C. It's great that new developers don't have to deal with the downsides of Objective-C, but that says nothing about the harsh realities of the profession.

Developers need to learn quickly, with minimal supervision, and with slightly more motivation than the rumble of the firing guillotine. Someone will say that this is only one of the costs of the profession. But if modern developers get frustrated and start to lag behind - and there are enough indications that they are - then why inspire people to get involved in all this uncertainty? What happens to the person who studied Objective-C day and night only to be horrified by the Swift announcement at WWDC 2014? Will they continue to program in what is quickly becoming a little-used language, or will they start all over again? If you are under thirty, then you are unlikely to face great difficulties. But if you need to feed your family and pay bills, then it becomes a titanic task.

In situations like this, people face all these difficulties without deep knowledge of programming or design. 

If you are learning programming, it will not be easy for you to start making money from it.

Seriously.

I spent over a year teaching myself before I could start freelancing. And the earnings were low. I failed countless interviews because I didn't have a programming degree.

There were times when I had nowhere to live and had to rely on the kindness of my friends. There were many nights when I wanted to give it all up. But I found the strength to continue.

It was - and is - perseverance that allowed me to stay in this field.

The truth is, you can't just pick up and do development, even as an intern. You will need connections, people to vouch for you, you need to maintain a GitHub account, and more. Despite the improvement in equality of opportunity, if you are an underrepresented minority, then you need to initially be twice as good as everyone else. And this is only in order to withstand the competition. 

Why are artificial intelligence developers becoming a valuable resource in the IT industry?

Google's California office, home to the company's main AI department. Photo: The New York Times

Silicon Valley companies are willing to spend huge amounts of money to attract rare specialists. If The New York Times is to be believed, artificial intelligence developers have been in great demand for the last couple of years. They are such a rare resource that organizations are willing to pay big money and hand out stock to attract employees with even a little experience in the field.

Interest in the technology

Google, Apple, Uber, Microsoft - these and other giants of Silicon Valley have been recruiting artificial intelligence specialists for the past few years. Some companies need them to improve smartphone functionality, while others hope to use them to create safe self-driving cars. And everyone is ready to invest a lot of money, even in not very experienced workers.

A typical AI developer working for a large company makes between $300,000 and $500,000 a year. This includes not only experienced university graduates, but also self-taught developers with minimal experience.

Companies often offer famous people in the industry stock grants that are sometimes valued at several million dollars. A prime example is Anthony Levandowski, who worked at Google from 2007 before moving to Uber in 2016. During his time at the search giant, he earned more than $120 million.

Anthony Levandowski. Photo: Wired

According to Microsoft managers, salaries for artificial intelligence developers are constantly growing. There are several reasons for this, the first being the competition between automakers and Silicon Valley for experts in self-driving transport.

Giants like Facebook and Google spend a lot of money on finding specialists in this rare field because they are sure that only such specialists can solve their problems. Mark Zuckerberg's company, for example, needs artificial intelligence specialists to develop algorithms for detecting offensive content on social networks.

Another reason: artificial intelligence developers are a very rare resource. According to The New York Times, there are only about 10,000 people in the world capable of solving complex problems in this field. Tech companies are therefore willing to spend a lot of money to attract as many of them as possible.

Lack of staff

Three years ago there was no such need: in 2014, Google invested about $650 million in its artificial intelligence laboratory, where 50 people worked. By 2016, the department had grown to 400 employees and a budget of $138 million in salaries alone - that is, about $345,000 per employee per year.

Google CEO Sundar Pichai talks about the AI technology built into the company's gadgets. Photo: Reuters

Companies are not only looking for specialists among young graduates, but also attract experienced scientists from prestigious US universities. Google once offered University of Washington academic Luke Zettlemoyer a salary of $180,000 a year to join the company.

That is three times more than he earned at the university. Google and Facebook have also launched training programs for aspiring artificial intelligence developers. This is how companies "grow" new and loyal employees.

The United States' capacity as a supplier of specialists is limited, so the giants of Silicon Valley look for promising workers around the world. The New York Times learned that companies are taking a strong interest in China and France, but the country of origin plays a minor role. What is valued in workers now is skill, even minimal skill.

Thursday 17 September 2020

Databases: preventing the loss of a key asset

Information has become the most important resource of the post-industrial era, and data an attractive target for criminals. Cybercriminals hack information systems, send spam with spyware, organize targeted phishing attacks, and cultivate contacts with unscrupulous insiders. The cost of information on the black market depends on many factors: relevance, volume, regional and industry specifics, how useful a particular record is for committing a particular crime, and, finally, simply the subjective feelings of “sellers” and “buyers”.

Corporate information is often hunted not by lone hackers but by organized criminal groups. By breaking into information systems, they try to grab large databases. They prefer to sell the stolen goods in bulk, and they often work to order. For example, a group neutralized in China had stolen more than 11 million records. A batch of 300,000 records sold for more than $30,000, and 1 million records for $121,000. Thus one record in a large repository was valued at $0.10-$0.12.


It is understandable that up-to-date payment information is always highly valued. Using stolen cardholder data, criminals can clone a card and withdraw money from the account. In September, the payment details of millions of visitors were stolen from the American fast food chain Sonic. Most of the accounts for sale cost between $25 and $50. The price is influenced by a number of factors: the payment system (American Express, Visa, MasterCard, etc.), the card level (Classic, Standard, Signature, Platinum, etc.), whether the card is debit or credit, and the issuing bank.

In recent years, cybercriminals have targeted social networks and web services, which have accumulated huge amounts of user information. At the end of August, a serious incident occurred at Instagram: as a result of a hack, about 6 million records were stolen, including data from celebrity accounts. The cybercriminals uploaded information from millions of accounts to the specially created DoxaGram website and offered it for sale for bitcoins, at a rate of $10 per lookup. And at the end of May, an unknown person was selling on the Internet a database of 100,000 users of the social network VKontakte who were allegedly engaged in distributing extremist materials. The "product" was valued at 100,000 rubles - that is, 1 ruble per record.

Insurance data is among the most liquid kinds of records. MFI Soft has catalogued the advertisements on Internet sites offering databases of Russian insurance companies for sale: 34 unique databases totaling more than 5.6 million records were discovered. Experts name insider activity as the main cause of such leaks. Notably, the size of a database does not correlate with its price. As a rule, small databases containing fairly fresh and complete information are valued higher than huge archives. The cost of one record in a small or medium-sized database can therefore reach 10 rubles, while in large databases it can drop to 0.001 rubles per record.

Medical information is also in strong demand on the black market, and the range of prices here is wide. A single record can be worth a few cents, but in some cases the cost runs to thousands of dollars. For example, in the spring hackers stole the personal records and photographs of 25,000 patients, including national and foreign celebrities, from the system of a Lithuanian plastic surgery clinic and put them up for sale on the darknet. The price for one record ranged from 50 to 2,000 euros, and the entire stolen database could be purchased for 344,000 euros. The direct and indirect losses of companies hit by confidential information leaks can be many times higher than the amounts hackers ask on the black market. According to a 2016 Ponemon Institute study commissioned by IBM, the average cost per stolen or lost record was $141. The average cost of one breach, although it decreased by 10% compared to the previous year, is still an impressive $3.62 million. After a leak, a company loses on average 5% of its market value, and customer churn can reach 7%. The number of such leaks worldwide in 2016 increased by about a third, while the amount of compromised information grew more than eightfold. High-tech companies accounted for nearly three quarters of all compromised data in the world - about 2.3 billion records, of which 87% were personal data (PD) of citizens.

“We are seeing an increase in the number of leaks and in the volume of compromised data at high-tech companies, for which information, including customer information, is, as a rule, a key asset, so any leak turns out to be very sensitive for the business,” said Sergey Khairuk, an analyst at InfoWatch Group. “In 2016, the data of hundreds of millions of users of popular resources such as Facebook, Foursquare, GitHub, iCloud, LinkedIn, MySpace, Snapchat, Telegram, Tumblr and Twitter were stolen. Hackers successfully attacked the largest mail services - Gmail, Hotmail, Yahoo, Mail.ru - and stole data from clients of telecommunications companies, including Deutsche Telekom, Three UK, Verizon and other operators. More than 95% of the data compromised in the high-tech sector in 2016 was accounted for by 31 "mega-leaks", each involving more than 10 million records. In the structure of leaks, the volume of personal data affected grew significantly, while the shares of payment information, trade secrets and know-how decreased. Despite the increase in the number of leaks caused by outside intruders, leaks from within high-tech companies are also very dangerous. The number of leaks attributable to external attackers in the high-tech sector grew by almost 15% over the year, while the distribution of damage by attack vector barely changed. In 2016, high-tech organizations also saw an increase in the number of deliberate information leaks, as well as in the proportion of qualified leaks associated with fraud or abuse of access rights.”

Wednesday 16 September 2020

It is difficult for a superintelligence to communicate with people - they simply cannot keep up with it

Introduction

Machines have already surpassed humans in many areas, but the general level of human intellectual development is incomparably higher. But the day is not far off when machines will become superintelligent.

The Fritz program plays chess significantly better than the leading grandmasters, but it cannot be called superintelligent, since it is superior to humans in only one narrow area.

Nick Bostrom believes that superintelligence is an intellect that is many times superior to the most outstanding people in mental development, scientific and technical activities, worldly wisdom and the development of social skills.

The book provides a variety of information about the history of artificial intelligence and the current state of affairs. The author ponders whether the superintelligence will be helpful or destructive, friendly or hostile.

Reading this book is not easy: scientific vocabulary, examples from physics, mathematics, economics and nanotechnology, along with numerous tables and diagrams, do not let the reader relax for a minute. The author himself admits that he tried to make the book easier to read, but failed. He also warns that not all the information is reliable and scientifically confirmed. Still, the book is of interest to anyone who cares about the future of humanity and is interested in modern technologies.

The past and present of artificial intelligence

After the appearance of the first computers in the 1940s, scientists started talking about the imminent emergence of superintelligence. The predictions did not come true, but modern futurists, like their predecessors, believe that superintelligent machines will be created in the near future.

History of artificial intelligence

1642-1940s - the zero generation - the simplest mechanical computers.

1940–1955 - first generation - vacuum tube computers.

1955-1965 - second generation - computers on transistors.

1965-1980 - the third generation - computers on integrated circuits.

1980–… - fourth generation - computers on very large-scale integrated circuits.

The fifth generation, focused on distributed computing, was never created; it was to become the basis for devices capable of imitating thinking.

In the summer of 1956, a group of scientists came to the first symposium on artificial intelligence at Dartmouth College in the United States. It was this event that became the starting point for research in the field of artificial intelligence, and many of the participants in that symposium gained worldwide fame. At first, they created only small systems, each of which in the laboratory could do something previously inaccessible to machine intelligence.

One of the earliest systems, the Logic Theorist, succeeded in proving theorems, and one of its proofs was more elegant than the original.

In the mid-1970s, the fashion for artificial intelligence passed. Sponsors refused to finance projects to create artificial intelligence, because they considered this direction unpromising.

The new golden age of artificial intelligence began in the early 1980s, when Japan launched a major project to create fifth-generation computers. The project was funded by the government together with commercial structures, and its goal was to create a high-performance machine that thinks like a person and can work with large databases. Many countries followed Japan's example and resumed work on artificial intelligence. Developers tried to create so-called expert systems - programs that process large databases and are designed to partially replace a person in various fields of activity. Hundreds of such systems were created, their code written by hand by thousands of programmers. However, after a few years it became obvious that developing, maintaining and updating expert systems is difficult and expensive. The project to create fifth-generation computers thus ceased to exist by the end of the 1980s.

Since the 1990s, methods based on neural networks and genetic algorithms, inspired by models of the human brain, began to develop.

In what areas is artificial intelligence superior to humans

By now, machine intelligence has already outstripped humans in many activities. Previously, people naively believed that in order to become a strong chess player, you need to have developed abstract thinking, be a strong strategist, be able to create cunning flexible plans and try to "read" the opponent's thoughts. It turned out that the strongest chess player can be defeated by a program with a special algorithm connected to a powerful processor. However, such artificial intelligence is limited only to the game of chess.

Checkers. In 1994, the CHINOOK program defeated the reigning world champion. This was the first time a program became world champion in an intellectual game.

Chess. In 1997, the Deep Blue program won a match against world champion Garry Kasparov, who confessed that he had noticed glimpses of real intelligence and superhuman ingenuity in the machine.

Scrabble. In 2002, a program defeated the strongest players.

Today, these achievements are no longer surprising. As John McCarthy (the American scientist who coined the term "artificial intelligence") remarked: "As soon as artificial intelligence begins to function, people stop considering it artificial intelligence."

Tuesday 15 September 2020

Internet in the 90s? How it was? Part 1: Gopher

I am starting to publish a series of reviews devoted to Internet protocols, common and not very common in the nineties of the twentieth century. This is done primarily in order to identify alternative methods of communication within the global web in the light of endless leaks of corporate and private information. Without any kind of blockchains and domains of the Onion zone. Simply put, without the modern mythology of planned obsolescence.

And the first Internet protocol of the nineties with which we will begin our acquaintance is Gopher. It was chosen because a rather negative reputation has developed around it in narrow computer circles - namely, as a non-functional and useless protocol. But this is only because the authors of such statements have never delved into the settings and configuration of this product.

The Gopher standard and viewer (browser) were developed at the University of Minnesota in 1991 as part of a program to enable easy document sharing over the Internet. Between people, institutions and even countries. Gopher's popularity increased when the Veronica and Jughead search engines were developed. The protocol itself was named after the gopher, the mascot of the University of Minnesota football team.

But in February 1993, the aforementioned institution decided to start charging licensing fees for Gopher servers, which caused understandable panic among the owners of these services. At the same time, CERN gave up its rights to the WWW standard, which halted Gopher's rise. It wasn't until 2000 that the University of Minnesota re-licensed the development under the GNU General Public License. But, as you can imagine, precious time had been lost.


Nevertheless, this protocol may today be of interest not only to corporate developers, but also simply to lovers of closed data exchange, beyond the reach of the ubiquitous Google, Microsoft and Yandex bots.

The first thing that strikes you about Gopher is its minimal requirements and size. The server archive takes up some 290 KB, and the WSgopher client 368 KB. The second thing that pleases is full cross-platform support: there are servers and browsers for absolutely any operating system, from DOS and Unix to MacOS and Windows. For this review we used Motsognir for Windows 9x, which works fine even in XP. It is available for download, like other solutions, from this page: http://www.jumpjet.info/Offbeat-Internet/Gopher/Servers/OS/specific.htm

The third, and not unimportant, feature of the protocol is the simplicity of server configuration. Where on the WWW we are used to endless configuration files with many connected modules, with Gopher things are simpler: there are only two configuration files. The first belongs to the server itself and follows the protocol standard; in the case of Motsognir, its full description is in the manuals.pdf file inside the archive. The second, the gophermap file, is created at the owner's discretion in the public (shared) file directory. It controls not only the listing of directories but also their presentation - a kind of mixture of a modern .htaccess file with ordinary HTML tags of a different standard. Examples of gophermap settings can be found on the Internet.
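
For illustration, here is roughly what a small gophermap might look like (a sketch; exact conventions vary between servers, and <TAB> stands for a literal tab character separating the fields):

    Welcome to my Gopher hole!
    Lines without tabs are shown to the visitor as plain text.
    1Software archive<TAB>/soft<TAB>example.org<TAB>70
    0About this server<TAB>/about.txt<TAB>example.org<TAB>70

The first character of an item line is the Gopher item type (1 for a submenu, 0 for a text file), followed by the display string, the selector, the host and the port.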

Meanwhile, the biggest myth I found among Russian-speaking commentators is the supposedly static nature of the protocol. This is not the case: a Gopher server can support not only CGI but even PHP. De facto, then, the protocol can serve dynamic content, despite the ascetic look of the basic configuration. Moreover, if desired, the server can be configured so that access to the service is granted only through accounts exported from Telnet for UNIX or Linux. Simply put, it is possible to password-protect directories and the server itself.

The developers' task is also simplified by the fact that Gopher is open source, so you can modify it as you wish. The protocol runs by default on port 70. It also lets you map file types to the external applications users view them with.
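
The wire protocol itself is trivial, which a minimal client makes obvious. The sketch below (Python, with the long-running public server gopher.floodgap.com used as an example target) opens a TCP connection to port 70, sends a selector terminated by CRLF, and reads the reply until the server closes the connection, as RFC 1436 prescribes:

    import socket

    def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
        """Request one selector from a Gopher server and return the raw reply."""
        with socket.create_connection((host, port), timeout=10) as sock:
            sock.sendall(selector.encode("ascii") + b"\r\n")  # empty selector = root menu
            chunks = []
            while True:
                data = sock.recv(4096)
                if not data:          # server closes the connection when done
                    break
                chunks.append(data)
        return b"".join(chunks)

    # Each line of a menu reply is one item: a type character and display string,
    # then selector, host and port, separated by tabs.
    print(gopher_fetch("gopher.floodgap.com").decode("utf-8", "replace"))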

All in all, I am left with a very good impression of setting up and configuring the Gopher client and server. Its flexibility and cross-platform support let you use it in conjunction with other alternative solutions - for example, to provide direct access to files uploaded via FTP with rights separated by account, or to read correspondence in conjunction with UseNet and other protocols. But more on that in the following reviews.

Monday 14 September 2020

How to meet an IT specialist: 5 true stories

Before, every mother wanted to see her daughter become an officer's wife. Now, in the 21st century - the age of technology - many girls want to come home and say: “Hurray! I am dating an IT specialist!” Five girls told a SoftTeco journalist about their experience of meeting and building a relationship with a programmer.

Where can you meet a programmer?

Katya (a programmer's wife): My husband and I met in a place that has nothing to do with his work, but which fully reflects my interests - I'm a book blogger. It happened at a book presentation at Gallery Y, which in those years was on Independence Square. I can definitely say that there is no universal way of how and where to meet a programmer. They are very versatile people, which means they can be found everywhere.

Lera (a programmer's wife): My future husband and I worked (and still work) at the same company - SoftTeco. A colleague invited me to eat pizza with them; I met the guys, and among them was my future husband. In general, people spend more than 50% of their time at work, so finding your soul mate there is most realistic. Plus, during demanding work processes you see a person "in action" and can appreciate his character. Qualities such as reliability, resistance to stress and the ability to compromise are very important in family life :). You can also meet at IT meetups, at IT courses, in a shared circle of friends (if you have techie friends), and even in a cafe. The main thing is that you are an interesting, well-rounded person with something to talk about.


Lina (a programmer's wife): My husband wrote to me on VKontakte and offered to take my picture. As you might guess, he didn't have a camera :). So programmers can resort to tricks to meet you. I think you could also get acquainted on a "programmer" platform a la GitHub: find the email of a programmer whose project has many stars and write to him.

Vika (a programmer's wife): I met my husband on Tinder. From my own experience, I can say that foreign programmers love this platform. My husband has a brother and friends who are programmers - and they're all on Tinder.

Masha (a programmer's girlfriend): I have met programmers in very different places - from the queue at the bakery to a bar. Programmers are ordinary guys who spend their time in the most ordinary places and do not put salary numbers at the top of their judgments of themselves and other people. I think there are no universal locations for IT dating.

How to charm a programmer?

Katya: Everything is individual. Someone likes homemade cutlets and football, while others like Chopin and Italian coffee. I think that men, and we are talking about them, always like warm-hearted girls, interesting and smart interlocutors, not dummies.

Lera: Everything works the same as for ordinary men. But given that programmers are people of intellectual work, they like smart girls.

Lina: My husband loves to talk about his work. In 2 years of our relationship, I have learned 3 words from his professional vocabulary: release, donate and reject. I don't understand anything else. I think the key to a successful relationship is to say "Yes, I think you did the right thing" to all stories about work.

Vika: Programmers are shy and calm. They are attracted by modest, feminine and well-bred girls. They want to feel feminine energy next to them, not a tomboy buddy.

Masha: Many programmers have rather monotonous jobs, so in their personal lives they look for a breath of "fresh air". In my experience, many programmers enjoy hedonistic girls who fill their lives with unusual events, new emotions and lightness. There is a separate category of geek programmers who storm code and fantasy worlds. They look for girls who will be on the same wavelength, have a "technical" mindset and are ready to memorize the names of all the heroes of some fantasy board game.

And what should you definitely not do?

Katya: Deliberately hunting for a programmer and trying hard to make him like you.

Lera: I would not focus on the level of income and would not try to single out programmers as a separate social class. A woman, first of all, must be self-sufficient. And thanks to this, she will meet a cool guy!

Lina: You can't belittle his interests. It will be important for the programmer to watch the Apple presentation if he is an iOS developer, so don't be offended if you won't have a date on the 10th of September.

Vika: You can't come to dates in a low neckline and a short skirt. In my experience, programmers are more attracted to an elegant, discreet look. I also know that programmers are afraid of mercenary girls and may deliberately avoid posting photos that show off material wealth.

Masha: Programmers are often guys who have earned good money from a young age and have long since tired of jokes about high salaries. So if they are set on a serious relationship, they easily catch girls manipulating or trying to take control of their budget. You can imagine how that ends.

Friday 11 September 2020

AI and machine learning in medicine

Artificial intelligence (AI) will become one of the most important factors affecting the development of human society in the coming years. We use this term broadly, covering all areas of the field, including Machine Learning (ML), Generative Adversarial Networks (GAN), Gradient-Boosted Tree models (GBM), Deep Reinforcement Learning (DRL), etc.

As a cloud provider, Cloud4Y has partnered with various healthcare organizations. It has always been an interesting experience, with new technical, legal and psychological difficulties to overcome.

Business, technology, and healthcare are areas where AI is most in demand. Let's take a look at how AI / ML tools can influence the quality of healthcare delivery.

The idea of using artificial intelligence in medicine dates back to 1972, when work on MYCIN began at Stanford University. It was a prototype AI program used to study blood poisoning. Early AI research continued mainly at American institutions: MIT and Tufts worked together, and the technology was actively developed at Stanford and Rutgers. In the 1980s, Stanford University continued its work on artificial intelligence in medicine as part of the SUMEX-AIM project (Stanford University Medical Experimental Computer - Artificial Intelligence in Medicine).

Thanks to the growth in computing power and the emergence of new artificial intelligence technologies, work in this direction has become much more active. News regularly appears about yet another scientific discovery made with the help of neural networks and machine learning. So what can be said today about the possibilities and prospects of AI in medicine?

AI in radiology

Vast amounts of medical imaging data sit in small local systems. But what if you leverage deep learning by uploading that data to the cloud and feeding it to AI? Machines and algorithms can interpret imaging data efficiently, identifying patterns and anomalies.

The most obvious use case is a radiology assistant involved in identifying and localizing suspicious skin lesions, tumors, internal hemorrhages, brain lesions and so on. The computer works faster and more accurately, and can return specific findings about a disease a few seconds after processing the information. A human cannot do that.

There is another point. Highly qualified specialists are expensive and in great demand. They are under serious pressure, literally drowning in the streams of data pouring in from all sides. If this article is to be believed, such a specialist is expected to issue a diagnosis every 3-4 seconds. Machine intelligence can augment the skills of an ordinary specialist, helping him sort out difficult cases - thus reducing the number of false diagnoses and saving lives.

Identifying rare or hard-to-diagnose diseases often depends on the doctor's experience and on how far the disease has progressed. Simply put, until the illness manifests itself clearly, it may go unrecognized. By training a computer on large datasets containing raw images and the many forms of pathology associated with particular diseases, it is possible to improve the quality of diagnosis and the number of diseases identified. This idea is being developed by the startup AIDOC.

AI is able to improve the quality of the work of medical institutions by automating the time-consuming and responsible part of the work of doctors. With the help of computer algorithms, you can also control the effectiveness of treatment and the quality of the operation performed, and predict the rate of recovery of the body.

Microsoft's InnerEye project is a good example of such technology. It proposes using ML techniques to segment and identify tumors in 3D radiological images. This can aid accurate surgery planning and navigation, and effective tumor contouring for radiation therapy planning.

AI in pathology

Pathological diagnosis involves examining a tissue section under a microscope. Using deep learning to train an image recognition algorithm, combined with human experience, provides more accurate diagnostics. Analyzing digital images at the pixel level can help detect lesions that the human eye easily misses, giving a more reliable diagnosis.
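
As a rough sketch of what "training an image recognition algorithm" means here, the following fine-tunes a pretrained CNN to separate tissue images into two classes. This is not any particular published system; the folder layout, class names and hyperparameters are invented for the example (PyTorch/torchvision):

    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Hypothetical folder layout: data/train/cancer/*.png, data/train/normal/*.png
    tfm = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    train_set = datasets.ImageFolder("data/train", transform=tfm)
    loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

    # Start from an ImageNet-pretrained backbone and replace the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: cancer / normal

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()

In practice the hard work lies elsewhere - slide digitization, labeling, and validation against pathologists - but the training loop itself really is this short.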

Such technology is being developed, for example, at Harvard Medical School, where an image recognition algorithm is trained to pick out images with pathologies and to distinguish cancerous from non-cancerous tissue. Combining this algorithm with human review yielded 99.5% accuracy.

Machine Learning and Medical Science

Petabytes of data are generated in all kinds of medical facilities. Unfortunately, this data is usually scattered and unstructured. This is by no means a reproach to doctors - they have to spend almost as much time reporting on treatment as treating. But the chaos greatly interferes with planning and with monitoring the health of a particular country, or of the world as a whole.

A further complication is that, unlike standard business data, patient data does not lend itself well to simple statistical modeling and analytics. A powerful AI-driven cloud platform with access to medical databases could efficiently analyze mixed information (for example, blood pathology, genetic traits, X-rays, medical history). It could also, at least in theory, reveal hidden patterns that get lost in the sheer volume of medical information.

Interpretable AI models and distributed machine learning systems are well suited to these tasks. They would make it possible not only to advance medical science by finding new patterns and racial, sex, and age characteristics, but also to build a more accurate picture of the health of the population in specific regions.
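
As a small illustration of what "interpretable" means in practice, here is a sketch in Python with scikit-learn: a logistic regression whose coefficients can be read directly as the direction and strength of each feature's influence, unlike the opaque weights of a deep network. The file name and feature columns are hypothetical.

```python
# A minimal sketch of an interpretable model on tabular patient data.
# The CSV file and feature names are hypothetical; the point is that a
# linear model's coefficients can be inspected directly.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("patients.csv")                    # hypothetical dataset
features = ["age", "hemoglobin", "glucose", "bmi"]  # hypothetical columns
X, y = df[features], df["diagnosis"]                # assumed binary label

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
scaler = StandardScaler().fit(X_train)

model = LogisticRegression().fit(scaler.transform(X_train), y_train)
print("test accuracy:", model.score(scaler.transform(X_test), y_test))

# Each coefficient shows how strongly a feature pushes the prediction
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.3f}")
```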

Surgical assistant robots

Many operations are already carried out using computer vision and manipulators controlled by a surgeon. This is a significant branch of medical technology, offsetting the human fatigue factor and making procedures more efficient. AI robots assist conventional surgeons well. For example, they:

Supervise the doctor's work, acting as a safeguard against lapses of attention;

Improve visibility for the surgeon, remind him of the sequence of actions during the procedure;

Create accurate, minimally invasive tissue incisions;

Reduce the patient's pain level by selecting the optimal incision geometry and suture.

Thursday 10 September 2020

Speech therapist with computer vision. How does the Belarusian app CatZu keep an eye on your tongue?

This episode on the ITitov YouTube channel contains plenty of cats, plus useful information for adults and their children. The CatZu app is a smartphone speech therapist that stands out from similar products for its analytical mind and persistence.

Today, the professional speech-language specialist (defectologist) Antonina Sudilovskaya is taking TikTok by storm with her articulation exercises. The virtual cat that stars in the CatZu app has been appearing in her account more and more often in the lead role. Its popularity on the youth social network can easily carry over to users of the mobile service. The virtual cat has also already been tried out in several preschools; it wins everyone over with its charisma and lures them into its neural networks.

The essence of the app is that the cat does the exercises together with its users. If you raise or lower your tongue incorrectly, do not smile widely enough, or purse your lips into a "duck face" poorly, the mobile assistant's computer vision system will not let you advance to the next level of the game.
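
For readers curious how such tracking works under the hood, here is a rough sketch built on the open-source MediaPipe Face Mesh, one common way to track lips and mouth shape from a webcam. This is only an illustration of the general approach, not CatZu's actual code; the landmark indices and the threshold are assumptions for the example.

```python
# A minimal sketch of webcam lip tracking with MediaPipe Face Mesh.
# Not CatZu's code - just the general technique such an app could use.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(max_num_faces=1)
cap = cv2.VideoCapture(0)  # default webcam

for _ in range(300):  # a short capture window for the sketch
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV delivers BGR
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        lm = results.multi_face_landmarks[0].landmark
        # Landmarks 13/14 are the inner upper/lower lip in the face-mesh
        # topology; the 0.05 threshold is an arbitrary assumption
        mouth_open = abs(lm[13].y - lm[14].y)
        if mouth_open > 0.05:
            print("mouth opened wide enough: the exercise counts")

cap.release()
```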

“Our cat does not know how to talk, but it helps motivate people to do articulation gymnastics, because the procedure itself is boring. Today some specialists are already grateful that such an additional working tool has appeared,” notes the founder of the CatZu startup, Antonina Sudilovskaya.

The CatZu startup is about a year old, and the team is now starting a new stage: testing the product with the general public. It also plans to find investors for marketing research.

Where can you find this virtual speech therapist today, and what extra bonuses does the service offer? All the details are in the new episode on the ITitov YouTube channel.

Wednesday 9 September 2020

Which programming languages are a waste of time and effort?

Steve Baker

There are hundreds, if not thousands, of programming languages. However, most of them are rarely used, or are used only for highly specialized tasks.

Some languages were invented purely for fun (Brainfuck, Whitespace: it is simply impossible to write anything sensible in them). Some were created for purely theoretical purposes (Subleq is the simplest programming language, with only one instruction). Other languages tried to implement an interesting idea but never became useful tools (Befunge is an unusual two-dimensional programming language). Some are used in very narrow areas and for specific tasks (NQC is used only for programming Lego robots!).

Some programming languages are so outdated that they are now used only for projects in small legacy niches. It hardly makes sense to learn APL, Snobol, or COBOL today. BASIC, like Dodo, is also unlikely to be a promising choice.

First of all, when choosing a language, look at the task in front of you, or at the area in which you want to build your future career. Based on that, choose the appropriate language and invest your study time there.

Most programmers face the need to work with web pages at some point, so almost any developer should learn JavaScript. If you want to create games, pay attention to C++ and C#. If you plan to work with web servers, knowledge of PHP will definitely come in handy. If your job is in the Linux kernel, C is indispensable. If your plans include projects in the banking industry, you might learn COBOL, but you will definitely need Java.

There is a very obvious reason why so many different programming languages exist: most of them are genuinely needed by programmers for their work, even if some are rarely used.

But everything depends, first of all, on what exactly you are going to do. If you know for certain that you will never have to work with web servers, you can forget about learning PHP.

If writing high-performance code is not your task, there is no need to learn "overly complex" C++. If you do not plan to spend your life developing 3D graphics, GLSL and HLSL are unlikely to be useful. If you will not be developing projects for the US military, you can ignore Ada.

New programming languages appear constantly, but there is no need to try to learn them all: it is not only impossible, it simply makes no sense. Just be prepared to pick up a new language from time to time, and keep in mind that studying a language in depth can take a lot of time.

Nikos Kapatais, programmer

Much depends on what kind of project you are working on. After college, I spent three years honing my C# skills. I chose that particular language because I saw that many companies were working with it. During those three years I also had to work with JavaScript, Java, Python, and even C++.

I ended up getting a job at a company that uses VB.NET. So, naturally, I had to set aside all my previous languages and focus entirely on VB.NET. Were Python and JS a waste of time? You could say so. But I should note that while I found working with C++ odd and not particularly useful, Java and C# really did help me.

No one can tell you exactly which language will be a waste of time for you. If you are sure you will never have to work with a language and it will never be useful to you, then learning it definitely makes no sense. But how can you be sure of that?

So for now, just pick a language you enjoy studying, and be prepared for the fact that at any moment you may need some other language and will have to learn it, whether that is Swift, Java, or PHP.

Wang Yo, full-stack developer

Learning a programming language can only be a waste of time if, having learned it, you cannot work on the project you need. So a lot depends on the situation; everything here is very individual. I also do not think you should choose a language to learn based on how popular it is, and here are some reasons why.

The language you are studying may be young but already gaining popularity, and in the future it may come into active use.

For example, compare Python's popularity when it was first released with its popularity today. Who knows, maybe Red or Rust will go the same way and break into the rankings of the most popular languages.

You may be learning a language that can hardly be called widely used, but there is a particular niche where it applies, and that niche promises a high-paying job.

COBOL is such an example. Many people claim that this language is "dead" and there is no point in learning it, but many companies still use it in their projects and offer jobs to COBOL developers.

You plan to look for freelance projects on your own, or you own a company, so you can decide for yourself which language to use for a given project.

Small firms very often hire an outside specialist or a small outsourcing company for development. In that case, the hiring company rarely sets strict requirements on the programming language; what matters to them is that the requirements are met and the result is delivered.

Learning one programming language makes it easier to learn new ones later.

Many people say learning Visual Basic or Haskell is completely pointless. However, the skills a specialist acquires while mastering these languages make it easier to later understand popular languages such as JavaScript or Java. A developer with that kind of experience gains a broader outlook, which helps him look at a solution from a different angle.

Tuesday 8 September 2020

Answer 1:

Computer Science vs Computer Engineering: What's the Difference?

Newcomers to the field of computing often use the terms computer science and computer engineering interchangeably. While the two have a lot in common, they also have plenty of differences. Computer science deals with the processing, storage, and transmission of data and instructions, whereas computer engineering is an amalgamation of electrical engineering and computer science. So when choosing a degree program, think about your preferences and decide accordingly.

As the needs of the computer industry have become more and more specific, higher education and degrees have become more specific too. This has created better job opportunities and more chances for students to study whatever they enjoy. It has also made choosing the right program more difficult.

COMPUTER SCIENCE AND COMPUTER ENGINEERING: DIFFERENCES AND SIMILARITIES

While the titles of computing courses have become more standardized and you can get a pretty good idea of what you are about to study, people still do not know the clear difference between basic terms like computer science and computer engineering. So, to explain this subtle difference (and similarity), I wrote this article.

COMPUTER SCIENCE IS NOT LIMITED TO PROGRAMMING

The biggest misconception about computer science is that it is all about programming. But there is much more to it than that. Computer science is an umbrella term that encompasses four main areas of computing:

Theory

Programming languages

Algorithms

Architecture

Find out more at:

Computer Science vs Computer Engineering: What's the Difference?

Answer 2:

There were many differences at my university. You could get a bachelor's degree in computer science, but it was not an engineering degree, and that meant far fewer general math and science courses: computer science majors could take the same mathematics as English majors. Computer engineering majors took the engineering core, which included six semesters of mathematics, three of physics, two of chemistry, laboratory sessions, and introductory classes in civil, systems, and electrical engineering. Once computer engineers completed the core classes, they went on to classes in machine code, logic design, and circuit boards. The programming classes overlapped and included multi-threaded design, data structures, object-oriented programming, comparative programming languages, operating system design, and compilers. It all comes down to what you want to do with your career. The computer science students at my university did more direct programming and some business/computing work than the computer engineers did. The choice can also affect your pay after graduation; in my experience, engineers get a higher starting salary. Good luck! Either way, this is a great field with many options.

Answer 3:

Well, I'm not entirely sure what your question is, but I assume you are asking about a BSc in CS versus a BE in CS.

In a BSc you learn more about the theoretical aspects of the subject, and the curriculum is more research-oriented, whereas a BE is more practical than purely theoretical. A BE degree from a good college is also more likely to earn you employment than the former (depending on your performance in college).

Hope my answer clears your doubts!

Answer 4:

People often confuse computing with computer science. The two fall into the same broad category but differ from each other. Computing is vast, while computer science is only one part of it, branching further into key areas such as algorithms, programming languages, design, and so on. Someone who decides to study computing in general learns the basics of its various branches, whereas earning a bachelor's degree in computer science helps you master the details of one particular branch of computing, increasing your chances of specializing.

Answer 5:

It depends on the institution granting the degrees as to what exactly is involved in both degrees. You really should research the schools you are considering.

In my case, my university had a CS school that granted CS degrees and focused on theory, software development, and the like, and a CE school that focused on hardware design. In my final undergraduate year the two schools merged, and my expected CS degree was issued as a CS&E degree, so from my point of view there was no difference between a BS in CS and a BS in CS&E.

Answer 6:

A Bachelor in Computer Science is a 3-year course leading to a bachelor's degree, while the computer engineering course is a 4-year program at a college of engineering; upon graduation you receive a BE/B.Tech degree and become a certified engineer.

Answer 7:

Looking at the curriculum you showed me, I would say the CS course is better if you want to keep working in software development. There are some critical subjects that CS has but CSE does not, such as databases and algorithms. I would go for CS.

Network Construction - Operational Phase

When installing a LAN, EcoLAN uses reliable, modern hardware from Cisco, Allied Telesyn, 3COM, and D-Link.

Building the cable infrastructure takes weeks or months. The contractor carries out the installation, eliminates hidden defects revealed by testing, and hands the system over to the Customer. The operational phase, during which the network is extended and modified, lasts for years. The Customer builds the network from standard, switched, and combined channels. Generations of computers change and the performance of network devices grows; data transfer rates increase many times over, raising the load on the network. The network's longevity is ensured by its redundancy, while its reliability depends on the quality of the materials and of the installation.

EcoLAN's assembly teams are highly qualified and have extensive hands-on experience. Knowledge of the subject and control of concealed work at every stage allow EcoLAN to guarantee the quality of its cable systems. That quality can be judged by the margin in the measured parameters: EcoLAN's practice shows that some of its category 5e SCS lines pass testing to category 6 parameters. We lay SCS in strict accordance with the standards: we respect the permissible fill of cable channels and the permissible loads on trays, observe electromagnetic compatibility requirements when routing cables, and install telecommunications grounding in every SCS, regardless of whether shielded lines are present.

An important point is compliance with the SCS administration requirements, which ensures the convenience of network operation.

Certified SCS manufactured by Panduit (USA), Nexans IES (UK), Nexans CS (France), Molex PN (USA), BICC Brand-Rex (UK), Eurolan Solution AB (Sweden) are guaranteed for a period of 20–25 years.

Other electrical work

In most cases, SCS installation is supplemented by electrical work. Each workstation needs power outlets for its computers; as a rule, standard outlets are provided, connected to their own group circuits. Some offices also require the installation of lighting systems.

The electrical work carried out by EcoLAN includes the installation of the power supply system, work and emergency lighting.

Customer benefits

Guaranteed quality of power and low-current systems.

Safe systems: we install telecommunications grounding in all SCS, regardless of the presence of shielded lines, as this is a mandatory requirement of European and national standards.

We carry out large volumes of work in a short time.

The customer bears minimal investment and operating costs.

The customer receives the best prices for large and medium-sized projects.

Working with us is comfortable: every issue is resolved by a personal manager with the necessary authority.

The customer can adjust the requirements during installation; we immediately reflect the changes in the as-built documentation and carry out the work at minimal extra cost.

High-quality professional labeling makes the systems convenient and pleasant to work with, and SCS subsystems are additionally color-coded on the panel ports in accordance with SCS standards.

The customer receives documented test results.

We help the Customer avoid mistakes.

Friday 4 September 2020

AZ-500T00. Microsoft Azure Security Technologies

Microsoft Azure is a cloud platform on which you can deploy infrastructure solutions, databases, applications, services, and functions. It runs the well-known office cloud applications Office 365 and Microsoft 365. You can also store and process large volumes of data there, and use ready-made platform services to add functionality to custom applications. More than 260 services run on Microsoft Azure. For convenience they are grouped into 22 areas, including DevOps, analytics, databases, security, blockchain, hybrid environments, artificial intelligence and machine learning, integration, IoT, mobile applications, multimedia, augmented reality, development tools, and several others.

Microsoft pays great attention to information security. The company invests $1 billion a year in this area, and more than 3,000 security professionals work there to ensure data protection and user privacy. Microsoft considers Azure the most secure cloud in the world and can show more certifications attesting to the platform's security than any comparable system. Note, too, that Microsoft Azure complies with the European General Data Protection Regulation (GDPR).

The Microsoft Azure platform was built with security in mind. All platform services have built-in protection and threat detection, and specialized tools such as Azure Security Center have been developed. The cloud can protect identities, networks, data, and other secrets from the most common types of attack, such as DDoS, spoofing, or cross-site scripting. Still, the "human factor" remains the main threat to information security. Microsoft Azure and other cloud platforms declare a "shared responsibility model": the cloud provider is responsible only for the "low-level" security of the virtual infrastructure and the physical security of the data center, while customers and users are responsible for the security of their networks, operating systems, applications, and data.

Secure cloud storage, including storage for "big data", offers a high level of functionality and scalability. Protection against unauthorized access and data loss is ensured through encryption and replication, with the option of using your own secret keys. Especially sensitive data, passwords, keys, connection strings, and certificates can be stored using Azure Key Vault.
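
For example, reading a secret back out of Key Vault takes only a few lines with the official Python SDK (the azure-identity and azure-keyvault-secrets packages); the vault URL and secret name below are placeholders.

```python
# A minimal sketch of reading a secret from Azure Key Vault in Python.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential picks up a CLI login, managed identity, or env vars
credential = DefaultAzureCredential()
client = SecretClient(vault_url="https://my-vault.vault.azure.net",  # placeholder
                      credential=credential)

secret = client.get_secret("db-connection-string")  # placeholder name
print(secret.name)  # never log secret.value in real code
```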

When working with any cloud service, it is important to remember that invulnerable systems do not exist, especially if they were deployed manually. Microsoft Azure provides the highest level of information security in the industry, but no platform can fully protect against problems caused by the "human factor", and most problems in technical systems are caused by humans. The best way to ensure the stability and security of cloud services is therefore to install and configure them automatically. That is why everything in Microsoft Azure is automated as far as possible: it offers managed services and Azure Resource Manager templates, so there is no need to deploy and configure components by hand; the platform handles this easily and safely.

For security administrators and information security professionals, there are many options and conveniences. These include Azure AD functionality, a security center, VPN gateway builders, specialized DDoS protection, Azure Information Protection, Key Vault, and Azure Sentinel, a security analytics tool.

Microsoft Azure is a cloud platform; to it, both fixed and mobile users are remote, which means working with them carries elevated risk. For account management, Microsoft Azure offers a range of services under the umbrella name Azure AD. This one-stop identity management and security platform controls over 1.2 billion identities, performs over 8 billion authentications daily, and protects users from 99.9% of cyber attacks. For account security, Azure AD provides many tools, such as multi-factor authentication and Azure AD Identity Protection with automatic risk and threat analysis and detection.
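
As a sketch of what signing a user in through Azure AD looks like from code, here is a minimal example using Microsoft's MSAL library for Python; the client ID and tenant are placeholders, and any configured MFA policy is enforced automatically during the interactive sign-in.

```python
# A minimal sketch of acquiring an Azure AD token with the msal package.
import msal

app = msal.PublicClientApplication(
    client_id="00000000-0000-0000-0000-000000000000",  # placeholder app ID
    authority="https://login.microsoftonline.com/contoso.onmicrosoft.com",
)

# Opens a browser for interactive sign-in; MFA policies apply automatically
result = app.acquire_token_interactive(scopes=["User.Read"])
if "access_token" in result:
    claims = result.get("id_token_claims", {})
    print("signed in as:", claims.get("preferred_username"))
else:
    print("sign-in failed:", result.get("error_description"))
```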

For hybrid IT infrastructures that combine on-premises and cloud resources, we recommend the fast, private Azure ExpressRoute. It lets any corporate traffic, even encrypted traffic, travel exclusively over a dedicated private channel rather than the public Internet.

IT and agriculture are considered the drivers of the Ukrainian economy, and companies in precisely these industries are known to make active use of Microsoft Azure in their enterprise IT infrastructure. For IT companies this seems natural; for large agricultural enterprises it is telling, demonstrating the advantages of the cloud in practice.

Cloud services, including those based on Microsoft Azure, have become a daily routine for Internet users. Desktop-only applications are becoming less and less popular, since we now spend 90% of our computer time in browsers. In the future there may be no need for desktop applications at all; Internet access to applications deployed on a cloud architecture will suffice. The important point is that, in such a world, information and network security issues become especially pressing.

Thursday 3 September 2020

Microsoft Teams will soon replace Skype for Business

The news that Microsoft will soon stop supporting Skype for Business and plans to move everyone to Teams was received by companies rather calmly: new customers have already been getting Teams with Office 365 since last year, and existing customers can continue to use Skype for Business for now.

What will happen with Skype for Business Online and Server 2019?

Let's sort out the dates when support for Skype for Business will finally be withdrawn.

Starting September 1, 2019, all new customers who purchase Office 365 receive Teams and do not have access to Skype for Business Online.

Skype for Business Online itself will be supported until July 31, 2021. Accordingly, all online and hybrid customers must migrate their users to Microsoft Teams.

The latest version of Skype for Business Server 2019 was released in October 2018 and will tentatively run until January 9, 2024.

For more information on the Skype for Business product lifecycle, see the questions and answers on the Microsoft website.

How do I prepare to migrate to Microsoft Teams?

Sooner or later, Microsoft will move all of its existing customers to Teams; new ones already get Microsoft Teams with the purchase of Office 365 and Microsoft 365.

The Learning Center "Networking Technologies" has launched a new authorized course, MS-700T00 "Managing Microsoft Teams".

This course will be useful for anyone who:

wants to know what Microsoft Teams is and how all of its components work together;

plans to implement governance, security, and compliance for Microsoft Teams;

must prepare the organization's environment for deploying Microsoft Teams;

intends to deploy and manage teams;

must manage and troubleshoot communications in Microsoft Teams.

TOGAF webinar. Creation and management of IT architecture

The pace of change in the external environment and in business is growing at an insane rate, so the demands on companies' adaptability increase from year to year. When goals change, the strategy changes, which in turn requires changes in business processes and in project priorities, in which the IT department plays a significant role. And for IT specialists to work as efficiently as possible, and for the quality and speed of management decisions to improve, a properly built and, above all, modern IT architecture is needed.

On August 27 at 10:00 (Kyiv time) we invite you to the free webinar “TOGAF. Creation and management of enterprise IT architecture”, where we will present our course NT-TOGAF: “TOGAF. Creation and management of enterprise IT architecture”, introduce the latest standards in enterprise architecture management, and describe the modern, universal TOGAF framework for building enterprise IT architecture.

At the webinar, we will consider the following questions:

What is enterprise architecture for?

What is the TOGAF method?

What enterprise architecture processes will the TOGAF methodology help to build?

How to choose tools for enterprise architecture development?

What will enable you to manage your architectural practice?

Who will benefit from this webinar:

Business owners

Managers and decision makers responsible for business development

IT directors, CIOs, CTOs

IT architects, integrators, and project managers

Tuesday 1 September 2020

Scalable PHP applications on Azure

Microsoft Ukraine and the Network Technologies Training Center invite development specialists to a special practical training dedicated to building PHP applications based on Windows Azure.

During the training, you will learn the benefits of using cloud technologies to create web solutions. In the labs, you will gain practical skills in working with Azure for PHP applications.

Audience: The training is aimed at developers and architects of PHP applications who are looking for a flexible, reliable and scalable platform for their applications.

Required level of preparation: basic knowledge of PHP and the fundamentals of SQL, plus a desire to explore cloud technologies using Azure as an example and to learn how to use Azure for PHP projects.

Trainer: 

Ivan Mosev

Ivan Mosev is an architect and developer with over 9 years of experience in PHP and Python development. For the last 7 years he has been building web applications in PHP, during which time he has gone from an ordinary developer to a technical director. Ivan always follows what is new in PHP, without forgetting to look around, and also experiments with Python (Django, ladon, kivy). Naturally, he does not neglect the client side: at the moment he is interested in developing web applications optimized for mobile platforms using JavaScript and HTML5. He has successfully applied various XP and Scrum practices in his projects and has used test-driven development since 2006. An adherent of engineering practices, he strives to convey this point of view to customers and fellow developers. He is the author of the training "TDD in PHP", which he has taught at the XP Injection training center for 2 years. Since 2011 he has actively used cloud solutions in his projects.

Dates and times: March 29, March 30, or April 6, 2013, from 9:30 am to 5:30 pm; the event lasts one day.

Cost of participation: Free, subject to prior registration

Venue: Kiev, st. Degtyarevskaya 48, office 411, Training Center "Network Technologies"

Contents*:

First Steps - Deploying a Website on Azure

You will develop a simple PHP/MySQL application from scratch and configure it for delivery to the cloud with Git in Azure Web Sites.

Using Azure Components

Learn how to use Azure's core components in your applications. The storage services covered are SQL Azure, Table Storage, Blob Storage, and Queue Storage, along with Windows Azure Caching.
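
Although the training itself targets PHP, the same Storage Blob service is reachable from any language; as a neutral illustration, here is a minimal sketch of uploading and listing blobs with the Python SDK (azure-storage-blob). The connection string, container, and file names are placeholders.

```python
# A minimal sketch of working with Azure Blob Storage from Python.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("uploads")  # placeholder container

# Upload a local file as a blob (placeholder file name)
with open("report.pdf", "rb") as data:
    container.upload_blob(name="report.pdf", data=data, overwrite=True)

# List what the container now holds
for blob in container.list_blobs():
    print(blob.name, blob.size)
```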

What is a Cloud Service, and how do you create one in PHP?

In this part, you will learn what a Cloud Service is and how it differs from Web Sites. We'll show you when to use cloud services. You will move the site you created earlier to the cloud service, learn how to create applications with multiple roles and configure the interaction between them. 

Diagnose and Manage Services in Azure

Learn to use the Azure Diagnostic Service. Based on its data, you can scale your application by changing the number of virtual machines depending on the load.

Restrictions: students may not participate, and no more than one representative per company.

* For practical tasks:

Bring your own laptop with your usual text editor or development environment and Git installed.

Each participant will need a valid Windows Azure trial account. You can create one in advance simply by following the instructions on the Windows Azure website. To do so, you will need a credit card enabled for online payments, which is used to verify your identity.

Registering an account in Windows Azure

Open http://www.windowsazure.com in your browser, click the “Free trial” link in the upper right corner, and then click “Try it free”.

You need a Microsoft account to register. If you don't have one, you can create it via the link “Don't have a Microsoft account? Sign up now” in the black box to the left of the login form.

Once you've signed in with your Microsoft account, follow the registration instructions.

Please provide accurate, up-to-date information when registering, as you will need to print, sign, and send a service agreement to Microsoft to complete registration.

Enter your credit card details.

Upon completion of registration, you should be able to access your personal Windows Azure portal.

Attention: money will be charged to the card only if you switch to the Pay-As-You-Go payment model in your Windows Azure profile settings. The card must be enabled for online transactions, and a spending limit should be set on it.

If you don't have a bank card, or yours cannot make Internet payments, you can still activate the Windows Azure trial using a virtual card. A virtual card (usually Visa or MasterCard) can be created through various services, for example Yandex.Money or Qiwi wallet.
